00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1717 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 2978 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.105 The recommended git tool is: git 00:00:00.105 using credential 00000000-0000-0000-0000-000000000002 00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.149 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.198 Using shallow fetch with depth 1 00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.198 > git --version # timeout=10 00:00:00.240 > git --version # 'git version 2.39.2' 00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.241 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.241 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.046 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.059 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.070 Checking out Revision 9a23290da272374f14acecb1f0954a7f78afc3cb (FETCH_HEAD) 00:00:06.070 > git config core.sparsecheckout # timeout=10 00:00:06.079 > git read-tree -mu HEAD # timeout=10 00:00:06.095 > git checkout -f 9a23290da272374f14acecb1f0954a7f78afc3cb # timeout=5 00:00:06.112 Commit message: "jenkins/perf: add artifacts cleanup for spdk files" 00:00:06.112 > git rev-list --no-walk 9a23290da272374f14acecb1f0954a7f78afc3cb # timeout=10 00:00:06.194 [Pipeline] Start of Pipeline 00:00:06.207 [Pipeline] library 00:00:06.208 Loading library shm_lib@master 00:00:06.208 Library shm_lib@master is cached. Copying from home. 00:00:06.223 [Pipeline] node 00:00:21.225 Still waiting to schedule task 00:00:21.225 Waiting for next available executor on ‘vagrant-vm-host’ 00:19:24.960 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:19:24.962 [Pipeline] { 00:19:24.976 [Pipeline] catchError 00:19:24.978 [Pipeline] { 00:19:24.995 [Pipeline] wrap 00:19:25.007 [Pipeline] { 00:19:25.015 [Pipeline] stage 00:19:25.017 [Pipeline] { (Prologue) 00:19:25.040 [Pipeline] echo 00:19:25.042 Node: VM-host-WFP7 00:19:25.051 [Pipeline] cleanWs 00:19:25.062 [WS-CLEANUP] Deleting project workspace... 00:19:25.062 [WS-CLEANUP] Deferred wipeout is used... 
00:19:25.070 [WS-CLEANUP] done 00:19:25.249 [Pipeline] setCustomBuildProperty 00:19:25.319 [Pipeline] nodesByLabel 00:19:25.321 Found a total of 1 nodes with the 'sorcerer' label 00:19:25.332 [Pipeline] httpRequest 00:19:25.337 HttpMethod: GET 00:19:25.337 URL: http://10.211.164.101/packages/jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz 00:19:25.340 Sending request to url: http://10.211.164.101/packages/jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz 00:19:25.342 Response Code: HTTP/1.1 200 OK 00:19:25.342 Success: Status code 200 is in the accepted range: 200,404 00:19:25.343 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz 00:19:25.480 [Pipeline] sh 00:19:25.763 + tar --no-same-owner -xf jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz 00:19:25.781 [Pipeline] httpRequest 00:19:25.786 HttpMethod: GET 00:19:25.786 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:19:25.787 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:19:25.788 Response Code: HTTP/1.1 200 OK 00:19:25.788 Success: Status code 200 is in the accepted range: 200,404 00:19:25.789 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:19:27.950 [Pipeline] sh 00:19:28.231 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:19:30.776 [Pipeline] sh 00:19:31.057 + git -C spdk log --oneline -n5 00:19:31.057 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:19:31.057 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:19:31.057 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:19:31.057 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:19:31.057 3b33f4333 test/nvme/cuse: Fix typo 00:19:31.103 [Pipeline] writeFile 00:19:31.113 [Pipeline] sh 00:19:31.388 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:19:31.398 [Pipeline] sh 00:19:31.677 + cat autorun-spdk.conf 00:19:31.677 SPDK_RUN_FUNCTIONAL_TEST=1 00:19:31.677 SPDK_TEST_NVMF=1 00:19:31.677 SPDK_TEST_NVMF_TRANSPORT=tcp 00:19:31.677 SPDK_TEST_URING=1 00:19:31.677 SPDK_TEST_VFIOUSER=1 00:19:31.677 SPDK_TEST_USDT=1 00:19:31.677 SPDK_RUN_UBSAN=1 00:19:31.677 NET_TYPE=virt 00:19:31.677 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:19:31.683 RUN_NIGHTLY=1 00:19:31.685 [Pipeline] } 00:19:31.701 [Pipeline] // stage 00:19:31.715 [Pipeline] stage 00:19:31.717 [Pipeline] { (Run VM) 00:19:31.731 [Pipeline] sh 00:19:32.010 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:19:32.010 + echo 'Start stage prepare_nvme.sh' 00:19:32.010 Start stage prepare_nvme.sh 00:19:32.010 + [[ -n 2 ]] 00:19:32.010 + disk_prefix=ex2 00:19:32.010 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:19:32.010 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:19:32.010 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:19:32.010 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:19:32.010 ++ SPDK_TEST_NVMF=1 00:19:32.010 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:19:32.010 ++ SPDK_TEST_URING=1 00:19:32.010 ++ SPDK_TEST_VFIOUSER=1 00:19:32.010 ++ SPDK_TEST_USDT=1 00:19:32.010 ++ SPDK_RUN_UBSAN=1 00:19:32.010 ++ NET_TYPE=virt 00:19:32.010 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:19:32.010 ++ RUN_NIGHTLY=1 
00:19:32.010 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:19:32.010 + nvme_files=() 00:19:32.010 + declare -A nvme_files 00:19:32.010 + backend_dir=/var/lib/libvirt/images/backends 00:19:32.010 + nvme_files['nvme.img']=5G 00:19:32.010 + nvme_files['nvme-cmb.img']=5G 00:19:32.010 + nvme_files['nvme-multi0.img']=4G 00:19:32.010 + nvme_files['nvme-multi1.img']=4G 00:19:32.010 + nvme_files['nvme-multi2.img']=4G 00:19:32.010 + nvme_files['nvme-openstack.img']=8G 00:19:32.010 + nvme_files['nvme-zns.img']=5G 00:19:32.010 + (( SPDK_TEST_NVME_PMR == 1 )) 00:19:32.010 + (( SPDK_TEST_FTL == 1 )) 00:19:32.010 + (( SPDK_TEST_NVME_FDP == 1 )) 00:19:32.010 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:19:32.010 + for nvme in "${!nvme_files[@]}" 00:19:32.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:19:32.010 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:19:32.010 + for nvme in "${!nvme_files[@]}" 00:19:32.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:19:32.010 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:19:32.010 + for nvme in "${!nvme_files[@]}" 00:19:32.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:19:32.010 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:19:32.010 + for nvme in "${!nvme_files[@]}" 00:19:32.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:19:32.010 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:19:32.010 + for nvme in "${!nvme_files[@]}" 00:19:32.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:19:32.010 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:19:32.010 + for nvme in "${!nvme_files[@]}" 00:19:32.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:19:32.010 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:19:32.010 + for nvme in "${!nvme_files[@]}" 00:19:32.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:19:32.268 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:19:32.268 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:19:32.268 + echo 'End stage prepare_nvme.sh' 00:19:32.268 End stage prepare_nvme.sh 00:19:32.281 [Pipeline] sh 00:19:32.564 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:19:32.564 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:19:32.564 00:19:32.564 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:19:32.564 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:19:32.564 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:19:32.564 HELP=0 00:19:32.564 DRY_RUN=0 00:19:32.564 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:19:32.564 NVME_DISKS_TYPE=nvme,nvme, 00:19:32.564 NVME_AUTO_CREATE=0 00:19:32.564 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:19:32.564 NVME_CMB=,, 00:19:32.564 NVME_PMR=,, 00:19:32.564 NVME_ZNS=,, 00:19:32.564 NVME_MS=,, 00:19:32.564 NVME_FDP=,, 00:19:32.564 SPDK_VAGRANT_DISTRO=fedora38 00:19:32.564 SPDK_VAGRANT_VMCPU=10 00:19:32.564 SPDK_VAGRANT_VMRAM=12288 00:19:32.564 SPDK_VAGRANT_PROVIDER=libvirt 00:19:32.564 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:19:32.564 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:19:32.564 SPDK_OPENSTACK_NETWORK=0 00:19:32.564 VAGRANT_PACKAGE_BOX=0 00:19:32.564 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:19:32.564 FORCE_DISTRO=true 00:19:32.564 VAGRANT_BOX_VERSION= 00:19:32.564 EXTRA_VAGRANTFILES= 00:19:32.564 NIC_MODEL=virtio 00:19:32.564 00:19:32.564 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt' 00:19:32.564 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:19:35.106 Bringing machine 'default' up with 'libvirt' provider... 00:19:35.674 ==> default: Creating image (snapshot of base box volume). 00:19:35.674 ==> default: Creating domain with the following settings... 00:19:35.674 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713341768_d1bfba6ded8bfcbf4881 00:19:35.674 ==> default: -- Domain type: kvm 00:19:35.674 ==> default: -- Cpus: 10 00:19:35.674 ==> default: -- Feature: acpi 00:19:35.674 ==> default: -- Feature: apic 00:19:35.674 ==> default: -- Feature: pae 00:19:35.674 ==> default: -- Memory: 12288M 00:19:35.674 ==> default: -- Memory Backing: hugepages: 00:19:35.674 ==> default: -- Management MAC: 00:19:35.674 ==> default: -- Loader: 00:19:35.674 ==> default: -- Nvram: 00:19:35.674 ==> default: -- Base box: spdk/fedora38 00:19:35.674 ==> default: -- Storage pool: default 00:19:35.674 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713341768_d1bfba6ded8bfcbf4881.img (20G) 00:19:35.674 ==> default: -- Volume Cache: default 00:19:35.674 ==> default: -- Kernel: 00:19:35.674 ==> default: -- Initrd: 00:19:35.674 ==> default: -- Graphics Type: vnc 00:19:35.674 ==> default: -- Graphics Port: -1 00:19:35.674 ==> default: -- Graphics IP: 127.0.0.1 00:19:35.674 ==> default: -- Graphics Password: Not defined 00:19:35.674 ==> default: -- Video Type: cirrus 00:19:35.674 ==> default: -- Video VRAM: 9216 00:19:35.674 ==> default: -- Sound Type: 00:19:35.674 ==> default: -- Keymap: en-us 00:19:35.674 ==> default: -- TPM Path: 00:19:35.674 ==> default: -- INPUT: type=mouse, bus=ps2 00:19:35.674 ==> default: -- Command line args: 00:19:35.674 ==> default: -> value=-device, 00:19:35.674 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:19:35.674 ==> default: -> value=-drive, 00:19:35.674 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:19:35.674 ==> default: -> value=-device, 00:19:35.674 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:19:35.674 ==> default: -> value=-device, 00:19:35.674 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:19:35.674 ==> default: -> value=-drive, 00:19:35.674 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:19:35.674 ==> default: -> value=-device, 00:19:35.675 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:19:35.675 ==> default: -> value=-drive, 00:19:35.675 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:19:35.675 ==> default: -> value=-device, 00:19:35.675 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:19:35.675 ==> default: -> value=-drive, 00:19:35.675 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:19:35.675 ==> default: -> value=-device, 00:19:35.675 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:19:35.934 ==> default: Creating shared folders metadata... 00:19:35.934 ==> default: Starting domain. 00:19:37.318 ==> default: Waiting for domain to get an IP address... 00:19:55.401 ==> default: Waiting for SSH to become available... 00:19:55.401 ==> default: Configuring and enabling network interfaces... 00:20:01.994 default: SSH address: 192.168.121.228:22 00:20:01.994 default: SSH username: vagrant 00:20:01.994 default: SSH auth method: private key 00:20:04.529 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:20:12.652 ==> default: Mounting SSHFS shared folder... 00:20:14.557 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:20:14.557 ==> default: Checking Mount.. 00:20:15.969 ==> default: Folder Successfully Mounted! 00:20:15.969 ==> default: Running provisioner: file... 00:20:16.906 default: ~/.gitconfig => .gitconfig 00:20:17.474 00:20:17.474 SUCCESS! 00:20:17.474 00:20:17.474 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:20:17.474 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:20:17.474 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
00:20:17.474 00:20:17.483 [Pipeline] } 00:20:17.505 [Pipeline] // stage 00:20:17.514 [Pipeline] dir 00:20:17.515 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt 00:20:17.516 [Pipeline] { 00:20:17.530 [Pipeline] catchError 00:20:17.532 [Pipeline] { 00:20:17.545 [Pipeline] sh 00:20:17.827 + vagrant ssh-config --host vagrant 00:20:17.827 + sed -ne /^Host/,$p 00:20:17.827 + tee ssh_conf 00:20:20.372 Host vagrant 00:20:20.372 HostName 192.168.121.228 00:20:20.372 User vagrant 00:20:20.372 Port 22 00:20:20.372 UserKnownHostsFile /dev/null 00:20:20.372 StrictHostKeyChecking no 00:20:20.372 PasswordAuthentication no 00:20:20.372 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:20:20.372 IdentitiesOnly yes 00:20:20.372 LogLevel FATAL 00:20:20.372 ForwardAgent yes 00:20:20.372 ForwardX11 yes 00:20:20.372 00:20:20.426 [Pipeline] withEnv 00:20:20.429 [Pipeline] { 00:20:20.443 [Pipeline] sh 00:20:20.719 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:20:20.719 source /etc/os-release 00:20:20.719 [[ -e /image.version ]] && img=$(< /image.version) 00:20:20.719 # Minimal, systemd-like check. 00:20:20.719 if [[ -e /.dockerenv ]]; then 00:20:20.719 # Clear garbage from the node's name: 00:20:20.719 # agt-er_autotest_547-896 -> autotest_547-896 00:20:20.719 # $HOSTNAME is the actual container id 00:20:20.719 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:20:20.719 if mountpoint -q /etc/hostname; then 00:20:20.719 # We can assume this is a mount from a host where container is running, 00:20:20.719 # so fetch its hostname to easily identify the target swarm worker. 00:20:20.719 container="$(< /etc/hostname) ($agent)" 00:20:20.719 else 00:20:20.719 # Fallback 00:20:20.719 container=$agent 00:20:20.719 fi 00:20:20.719 fi 00:20:20.719 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:20:20.719 00:20:20.991 [Pipeline] } 00:20:21.012 [Pipeline] // withEnv 00:20:21.020 [Pipeline] setCustomBuildProperty 00:20:21.033 [Pipeline] stage 00:20:21.034 [Pipeline] { (Tests) 00:20:21.050 [Pipeline] sh 00:20:21.331 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:20:21.604 [Pipeline] timeout 00:20:21.605 Timeout set to expire in 30 min 00:20:21.606 [Pipeline] { 00:20:21.624 [Pipeline] sh 00:20:21.902 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:20:22.470 HEAD is now at 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:20:22.482 [Pipeline] sh 00:20:22.762 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:20:23.062 [Pipeline] sh 00:20:23.345 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:20:23.621 [Pipeline] sh 00:20:23.903 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:20:24.162 ++ readlink -f spdk_repo 00:20:24.162 + DIR_ROOT=/home/vagrant/spdk_repo 00:20:24.162 + [[ -n /home/vagrant/spdk_repo ]] 00:20:24.162 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:20:24.162 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:20:24.162 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:20:24.162 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:20:24.162 + [[ -d /home/vagrant/spdk_repo/output ]] 00:20:24.162 + cd /home/vagrant/spdk_repo 00:20:24.162 + source /etc/os-release 00:20:24.162 ++ NAME='Fedora Linux' 00:20:24.162 ++ VERSION='38 (Cloud Edition)' 00:20:24.162 ++ ID=fedora 00:20:24.162 ++ VERSION_ID=38 00:20:24.162 ++ VERSION_CODENAME= 00:20:24.162 ++ PLATFORM_ID=platform:f38 00:20:24.162 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:20:24.162 ++ ANSI_COLOR='0;38;2;60;110;180' 00:20:24.162 ++ LOGO=fedora-logo-icon 00:20:24.162 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:20:24.162 ++ HOME_URL=https://fedoraproject.org/ 00:20:24.162 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:20:24.162 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:20:24.162 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:20:24.162 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:20:24.162 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:20:24.162 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:20:24.162 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:20:24.162 ++ SUPPORT_END=2024-05-14 00:20:24.162 ++ VARIANT='Cloud Edition' 00:20:24.162 ++ VARIANT_ID=cloud 00:20:24.162 + uname -a 00:20:24.162 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:20:24.162 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:20:24.162 Hugepages 00:20:24.162 node hugesize free / total 00:20:24.162 node0 1048576kB 0 / 0 00:20:24.162 node0 2048kB 0 / 0 00:20:24.162 00:20:24.162 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:24.421 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:20:24.421 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:20:24.421 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:20:24.421 + rm -f /tmp/spdk-ld-path 00:20:24.421 + source autorun-spdk.conf 00:20:24.421 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:20:24.421 ++ SPDK_TEST_NVMF=1 00:20:24.421 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:20:24.421 ++ SPDK_TEST_URING=1 00:20:24.421 ++ SPDK_TEST_VFIOUSER=1 00:20:24.421 ++ SPDK_TEST_USDT=1 00:20:24.421 ++ SPDK_RUN_UBSAN=1 00:20:24.421 ++ NET_TYPE=virt 00:20:24.421 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:24.421 ++ RUN_NIGHTLY=1 00:20:24.421 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:20:24.421 + [[ -n '' ]] 00:20:24.421 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:20:24.421 + for M in /var/spdk/build-*-manifest.txt 00:20:24.421 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:20:24.421 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:20:24.421 + for M in /var/spdk/build-*-manifest.txt 00:20:24.421 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:20:24.421 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:20:24.421 ++ uname 00:20:24.421 + [[ Linux == \L\i\n\u\x ]] 00:20:24.421 + sudo dmesg -T 00:20:24.421 + sudo dmesg --clear 00:20:24.680 + dmesg_pid=5297 00:20:24.680 + [[ Fedora Linux == FreeBSD ]] 00:20:24.680 + sudo dmesg -Tw 00:20:24.680 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:20:24.680 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:20:24.680 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:20:24.680 + [[ -x /usr/src/fio-static/fio ]] 00:20:24.680 + export FIO_BIN=/usr/src/fio-static/fio 00:20:24.680 + FIO_BIN=/usr/src/fio-static/fio 00:20:24.680 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:20:24.680 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:20:24.680 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:20:24.680 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:20:24.680 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:20:24.680 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:20:24.680 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:20:24.680 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:20:24.680 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:20:24.680 Test configuration: 00:20:24.680 SPDK_RUN_FUNCTIONAL_TEST=1 00:20:24.680 SPDK_TEST_NVMF=1 00:20:24.680 SPDK_TEST_NVMF_TRANSPORT=tcp 00:20:24.680 SPDK_TEST_URING=1 00:20:24.680 SPDK_TEST_VFIOUSER=1 00:20:24.680 SPDK_TEST_USDT=1 00:20:24.680 SPDK_RUN_UBSAN=1 00:20:24.680 NET_TYPE=virt 00:20:24.680 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:24.680 RUN_NIGHTLY=1 08:16:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.680 08:16:57 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:24.680 08:16:57 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.680 08:16:57 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.680 08:16:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.680 08:16:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.680 08:16:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.680 08:16:57 -- paths/export.sh@5 -- $ export PATH 00:20:24.680 08:16:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.680 08:16:57 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:24.680 08:16:57 -- common/autobuild_common.sh@435 -- $ date +%s 00:20:24.680 08:16:57 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713341817.XXXXXX 00:20:24.680 08:16:57 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713341817.OvdBwu 00:20:24.680 08:16:57 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:20:24.680 08:16:57 -- common/autobuild_common.sh@441 -- $ '[' 
-n '' ']' 00:20:24.680 08:16:57 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:24.680 08:16:57 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:24.680 08:16:57 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:24.680 08:16:57 -- common/autobuild_common.sh@451 -- $ get_config_params 00:20:24.680 08:16:57 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:20:24.680 08:16:57 -- common/autotest_common.sh@10 -- $ set +x 00:20:24.680 08:16:57 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:20:24.680 08:16:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:20:24.681 08:16:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:20:24.681 08:16:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:24.681 08:16:57 -- spdk/autobuild.sh@16 -- $ date -u 00:20:24.681 Wed Apr 17 08:16:57 AM UTC 2024 00:20:24.681 08:16:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:20:24.681 LTS-24-g36faa8c31 00:20:24.681 08:16:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:20:24.681 08:16:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:20:24.681 08:16:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:20:24.681 08:16:57 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:20:24.681 08:16:57 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:20:24.681 08:16:57 -- common/autotest_common.sh@10 -- $ set +x 00:20:24.681 ************************************ 00:20:24.681 START TEST ubsan 00:20:24.681 ************************************ 00:20:24.681 using ubsan 00:20:24.681 08:16:57 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:20:24.681 00:20:24.681 real 0m0.000s 00:20:24.681 user 0m0.000s 00:20:24.681 sys 0m0.000s 00:20:24.681 08:16:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:20:24.681 08:16:57 -- common/autotest_common.sh@10 -- $ set +x 00:20:24.681 ************************************ 00:20:24.681 END TEST ubsan 00:20:24.681 ************************************ 00:20:24.940 08:16:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:20:24.940 08:16:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:20:24.940 08:16:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:20:24.940 08:16:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:20:24.940 08:16:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:20:24.940 08:16:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:20:24.940 08:16:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:20:24.940 08:16:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:20:24.940 08:16:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:20:25.198 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:20:25.198 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:20:25.457 Using 'verbs' RDMA provider 
00:20:38.610 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:20:53.509 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:20:53.509 Creating mk/config.mk...done. 00:20:53.509 Creating mk/cc.flags.mk...done. 00:20:53.509 Type 'make' to build. 00:20:53.509 08:17:24 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:20:53.509 08:17:24 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:20:53.509 08:17:24 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:20:53.509 08:17:24 -- common/autotest_common.sh@10 -- $ set +x 00:20:53.509 ************************************ 00:20:53.509 START TEST make 00:20:53.509 ************************************ 00:20:53.509 08:17:24 -- common/autotest_common.sh@1104 -- $ make -j10 00:20:53.509 make[1]: Nothing to be done for 'all'. 00:20:53.509 The Meson build system 00:20:53.509 Version: 1.3.1 00:20:53.509 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:20:53.509 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:20:53.509 Build type: native build 00:20:53.509 Project name: libvfio-user 00:20:53.509 Project version: 0.0.1 00:20:53.509 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:20:53.509 C linker for the host machine: cc ld.bfd 2.39-16 00:20:53.509 Host machine cpu family: x86_64 00:20:53.509 Host machine cpu: x86_64 00:20:53.509 Run-time dependency threads found: YES 00:20:53.509 Library dl found: YES 00:20:53.509 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:20:53.509 Run-time dependency json-c found: YES 0.17 00:20:53.509 Run-time dependency cmocka found: YES 1.1.7 00:20:53.509 Program pytest-3 found: NO 00:20:53.509 Program flake8 found: NO 00:20:53.509 Program misspell-fixer found: NO 00:20:53.509 Program restructuredtext-lint found: NO 00:20:53.509 Program valgrind found: YES (/usr/bin/valgrind) 00:20:53.509 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:20:53.509 Compiler for C supports arguments -Wmissing-declarations: YES 00:20:53.509 Compiler for C supports arguments -Wwrite-strings: YES 00:20:53.509 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:20:53.509 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:20:53.509 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:20:53.510 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:20:53.510 Build targets in project: 8 00:20:53.510 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:20:53.510 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:20:53.510 00:20:53.510 libvfio-user 0.0.1 00:20:53.510 00:20:53.510 User defined options 00:20:53.510 buildtype : debug 00:20:53.510 default_library: shared 00:20:53.510 libdir : /usr/local/lib 00:20:53.510 00:20:53.510 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:20:53.510 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:20:53.510 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:20:53.510 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:20:53.510 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:20:53.510 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:20:53.510 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:20:53.510 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:20:53.510 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:20:53.510 [8/37] Compiling C object samples/null.p/null.c.o 00:20:53.510 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:20:53.775 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:20:53.775 [11/37] Compiling C object samples/server.p/server.c.o 00:20:53.775 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:20:53.775 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:20:53.775 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:20:53.775 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:20:53.775 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:20:53.775 [17/37] Compiling C object samples/client.p/client.c.o 00:20:53.775 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:20:53.775 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:20:53.775 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:20:53.775 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:20:53.775 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:20:53.775 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:20:53.775 [24/37] Linking target samples/client 00:20:53.775 [25/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:20:53.775 [26/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:20:53.775 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:20:53.775 [28/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:20:53.775 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:20:54.039 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:20:54.039 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:20:54.039 [32/37] Linking target test/unit_tests 00:20:54.039 [33/37] Linking target samples/server 00:20:54.039 [34/37] Linking target samples/lspci 00:20:54.039 [35/37] Linking target samples/null 00:20:54.039 [36/37] Linking target samples/gpio-pci-idio-16 00:20:54.039 [37/37] Linking target samples/shadow_ioeventfd_server 00:20:54.039 INFO: autodetecting backend as ninja 00:20:54.039 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:20:54.039 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:20:54.606 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:20:54.606 ninja: no work to do. 00:21:02.720 The Meson build system 00:21:02.720 Version: 1.3.1 00:21:02.720 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:21:02.720 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:21:02.720 Build type: native build 00:21:02.720 Program cat found: YES (/usr/bin/cat) 00:21:02.720 Project name: DPDK 00:21:02.720 Project version: 23.11.0 00:21:02.720 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:21:02.720 C linker for the host machine: cc ld.bfd 2.39-16 00:21:02.720 Host machine cpu family: x86_64 00:21:02.720 Host machine cpu: x86_64 00:21:02.720 Message: ## Building in Developer Mode ## 00:21:02.720 Program pkg-config found: YES (/usr/bin/pkg-config) 00:21:02.720 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:21:02.720 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:21:02.720 Program python3 found: YES (/usr/bin/python3) 00:21:02.720 Program cat found: YES (/usr/bin/cat) 00:21:02.720 Compiler for C supports arguments -march=native: YES 00:21:02.720 Checking for size of "void *" : 8 00:21:02.720 Checking for size of "void *" : 8 (cached) 00:21:02.720 Library m found: YES 00:21:02.720 Library numa found: YES 00:21:02.720 Has header "numaif.h" : YES 00:21:02.720 Library fdt found: NO 00:21:02.720 Library execinfo found: NO 00:21:02.720 Has header "execinfo.h" : YES 00:21:02.720 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:21:02.720 Run-time dependency libarchive found: NO (tried pkgconfig) 00:21:02.720 Run-time dependency libbsd found: NO (tried pkgconfig) 00:21:02.720 Run-time dependency jansson found: NO (tried pkgconfig) 00:21:02.720 Run-time dependency openssl found: YES 3.0.9 00:21:02.720 Run-time dependency libpcap found: YES 1.10.4 00:21:02.720 Has header "pcap.h" with dependency libpcap: YES 00:21:02.720 Compiler for C supports arguments -Wcast-qual: YES 00:21:02.720 Compiler for C supports arguments -Wdeprecated: YES 00:21:02.720 Compiler for C supports arguments -Wformat: YES 00:21:02.720 Compiler for C supports arguments -Wformat-nonliteral: NO 00:21:02.720 Compiler for C supports arguments -Wformat-security: NO 00:21:02.720 Compiler for C supports arguments -Wmissing-declarations: YES 00:21:02.720 Compiler for C supports arguments -Wmissing-prototypes: YES 00:21:02.720 Compiler for C supports arguments -Wnested-externs: YES 00:21:02.720 Compiler for C supports arguments -Wold-style-definition: YES 00:21:02.720 Compiler for C supports arguments -Wpointer-arith: YES 00:21:02.720 Compiler for C supports arguments -Wsign-compare: YES 00:21:02.720 Compiler for C supports arguments -Wstrict-prototypes: YES 00:21:02.720 Compiler for C supports arguments -Wundef: YES 00:21:02.720 Compiler for C supports arguments -Wwrite-strings: YES 00:21:02.720 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:21:02.720 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:21:02.720 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:21:02.720 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:21:02.720 Program objdump found: YES (/usr/bin/objdump) 00:21:02.720 
Compiler for C supports arguments -mavx512f: YES 00:21:02.720 Checking if "AVX512 checking" compiles: YES 00:21:02.720 Fetching value of define "__SSE4_2__" : 1 00:21:02.720 Fetching value of define "__AES__" : 1 00:21:02.720 Fetching value of define "__AVX__" : 1 00:21:02.720 Fetching value of define "__AVX2__" : 1 00:21:02.720 Fetching value of define "__AVX512BW__" : 1 00:21:02.720 Fetching value of define "__AVX512CD__" : 1 00:21:02.720 Fetching value of define "__AVX512DQ__" : 1 00:21:02.720 Fetching value of define "__AVX512F__" : 1 00:21:02.721 Fetching value of define "__AVX512VL__" : 1 00:21:02.721 Fetching value of define "__PCLMUL__" : 1 00:21:02.721 Fetching value of define "__RDRND__" : 1 00:21:02.721 Fetching value of define "__RDSEED__" : 1 00:21:02.721 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:21:02.721 Fetching value of define "__znver1__" : (undefined) 00:21:02.721 Fetching value of define "__znver2__" : (undefined) 00:21:02.721 Fetching value of define "__znver3__" : (undefined) 00:21:02.721 Fetching value of define "__znver4__" : (undefined) 00:21:02.721 Compiler for C supports arguments -Wno-format-truncation: YES 00:21:02.721 Message: lib/log: Defining dependency "log" 00:21:02.721 Message: lib/kvargs: Defining dependency "kvargs" 00:21:02.721 Message: lib/telemetry: Defining dependency "telemetry" 00:21:02.721 Checking for function "getentropy" : NO 00:21:02.721 Message: lib/eal: Defining dependency "eal" 00:21:02.721 Message: lib/ring: Defining dependency "ring" 00:21:02.721 Message: lib/rcu: Defining dependency "rcu" 00:21:02.721 Message: lib/mempool: Defining dependency "mempool" 00:21:02.721 Message: lib/mbuf: Defining dependency "mbuf" 00:21:02.721 Fetching value of define "__PCLMUL__" : 1 (cached) 00:21:02.721 Fetching value of define "__AVX512F__" : 1 (cached) 00:21:02.721 Fetching value of define "__AVX512BW__" : 1 (cached) 00:21:02.721 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:21:02.721 Fetching value of define "__AVX512VL__" : 1 (cached) 00:21:02.721 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:21:02.721 Compiler for C supports arguments -mpclmul: YES 00:21:02.721 Compiler for C supports arguments -maes: YES 00:21:02.721 Compiler for C supports arguments -mavx512f: YES (cached) 00:21:02.721 Compiler for C supports arguments -mavx512bw: YES 00:21:02.721 Compiler for C supports arguments -mavx512dq: YES 00:21:02.721 Compiler for C supports arguments -mavx512vl: YES 00:21:02.721 Compiler for C supports arguments -mvpclmulqdq: YES 00:21:02.721 Compiler for C supports arguments -mavx2: YES 00:21:02.721 Compiler for C supports arguments -mavx: YES 00:21:02.721 Message: lib/net: Defining dependency "net" 00:21:02.721 Message: lib/meter: Defining dependency "meter" 00:21:02.721 Message: lib/ethdev: Defining dependency "ethdev" 00:21:02.721 Message: lib/pci: Defining dependency "pci" 00:21:02.721 Message: lib/cmdline: Defining dependency "cmdline" 00:21:02.721 Message: lib/hash: Defining dependency "hash" 00:21:02.721 Message: lib/timer: Defining dependency "timer" 00:21:02.721 Message: lib/compressdev: Defining dependency "compressdev" 00:21:02.721 Message: lib/cryptodev: Defining dependency "cryptodev" 00:21:02.721 Message: lib/dmadev: Defining dependency "dmadev" 00:21:02.721 Compiler for C supports arguments -Wno-cast-qual: YES 00:21:02.721 Message: lib/power: Defining dependency "power" 00:21:02.721 Message: lib/reorder: Defining dependency "reorder" 00:21:02.721 Message: lib/security: Defining dependency 
"security" 00:21:02.721 Has header "linux/userfaultfd.h" : YES 00:21:02.721 Has header "linux/vduse.h" : YES 00:21:02.721 Message: lib/vhost: Defining dependency "vhost" 00:21:02.721 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:21:02.721 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:21:02.721 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:21:02.721 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:21:02.721 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:21:02.721 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:21:02.721 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:21:02.721 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:21:02.721 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:21:02.721 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:21:02.721 Program doxygen found: YES (/usr/bin/doxygen) 00:21:02.721 Configuring doxy-api-html.conf using configuration 00:21:02.721 Configuring doxy-api-man.conf using configuration 00:21:02.721 Program mandb found: YES (/usr/bin/mandb) 00:21:02.721 Program sphinx-build found: NO 00:21:02.721 Configuring rte_build_config.h using configuration 00:21:02.721 Message: 00:21:02.721 ================= 00:21:02.721 Applications Enabled 00:21:02.721 ================= 00:21:02.721 00:21:02.721 apps: 00:21:02.721 00:21:02.721 00:21:02.721 Message: 00:21:02.721 ================= 00:21:02.721 Libraries Enabled 00:21:02.721 ================= 00:21:02.721 00:21:02.721 libs: 00:21:02.721 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:21:02.721 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:21:02.721 cryptodev, dmadev, power, reorder, security, vhost, 00:21:02.721 00:21:02.721 Message: 00:21:02.721 =============== 00:21:02.721 Drivers Enabled 00:21:02.721 =============== 00:21:02.721 00:21:02.721 common: 00:21:02.721 00:21:02.721 bus: 00:21:02.721 pci, vdev, 00:21:02.721 mempool: 00:21:02.721 ring, 00:21:02.721 dma: 00:21:02.721 00:21:02.721 net: 00:21:02.721 00:21:02.721 crypto: 00:21:02.721 00:21:02.721 compress: 00:21:02.721 00:21:02.721 vdpa: 00:21:02.721 00:21:02.721 00:21:02.721 Message: 00:21:02.721 ================= 00:21:02.721 Content Skipped 00:21:02.721 ================= 00:21:02.721 00:21:02.721 apps: 00:21:02.721 dumpcap: explicitly disabled via build config 00:21:02.721 graph: explicitly disabled via build config 00:21:02.721 pdump: explicitly disabled via build config 00:21:02.721 proc-info: explicitly disabled via build config 00:21:02.721 test-acl: explicitly disabled via build config 00:21:02.721 test-bbdev: explicitly disabled via build config 00:21:02.721 test-cmdline: explicitly disabled via build config 00:21:02.721 test-compress-perf: explicitly disabled via build config 00:21:02.721 test-crypto-perf: explicitly disabled via build config 00:21:02.721 test-dma-perf: explicitly disabled via build config 00:21:02.721 test-eventdev: explicitly disabled via build config 00:21:02.721 test-fib: explicitly disabled via build config 00:21:02.721 test-flow-perf: explicitly disabled via build config 00:21:02.721 test-gpudev: explicitly disabled via build config 00:21:02.721 test-mldev: explicitly disabled via build config 00:21:02.721 test-pipeline: explicitly disabled via build config 00:21:02.721 test-pmd: explicitly disabled via build config 00:21:02.721 test-regex: explicitly disabled via 
build config 00:21:02.721 test-sad: explicitly disabled via build config 00:21:02.721 test-security-perf: explicitly disabled via build config 00:21:02.721 00:21:02.721 libs: 00:21:02.721 metrics: explicitly disabled via build config 00:21:02.721 acl: explicitly disabled via build config 00:21:02.721 bbdev: explicitly disabled via build config 00:21:02.721 bitratestats: explicitly disabled via build config 00:21:02.721 bpf: explicitly disabled via build config 00:21:02.721 cfgfile: explicitly disabled via build config 00:21:02.721 distributor: explicitly disabled via build config 00:21:02.721 efd: explicitly disabled via build config 00:21:02.721 eventdev: explicitly disabled via build config 00:21:02.721 dispatcher: explicitly disabled via build config 00:21:02.721 gpudev: explicitly disabled via build config 00:21:02.721 gro: explicitly disabled via build config 00:21:02.721 gso: explicitly disabled via build config 00:21:02.721 ip_frag: explicitly disabled via build config 00:21:02.721 jobstats: explicitly disabled via build config 00:21:02.721 latencystats: explicitly disabled via build config 00:21:02.721 lpm: explicitly disabled via build config 00:21:02.721 member: explicitly disabled via build config 00:21:02.721 pcapng: explicitly disabled via build config 00:21:02.721 rawdev: explicitly disabled via build config 00:21:02.721 regexdev: explicitly disabled via build config 00:21:02.721 mldev: explicitly disabled via build config 00:21:02.721 rib: explicitly disabled via build config 00:21:02.721 sched: explicitly disabled via build config 00:21:02.721 stack: explicitly disabled via build config 00:21:02.721 ipsec: explicitly disabled via build config 00:21:02.721 pdcp: explicitly disabled via build config 00:21:02.721 fib: explicitly disabled via build config 00:21:02.721 port: explicitly disabled via build config 00:21:02.721 pdump: explicitly disabled via build config 00:21:02.721 table: explicitly disabled via build config 00:21:02.721 pipeline: explicitly disabled via build config 00:21:02.721 graph: explicitly disabled via build config 00:21:02.721 node: explicitly disabled via build config 00:21:02.721 00:21:02.721 drivers: 00:21:02.721 common/cpt: not in enabled drivers build config 00:21:02.721 common/dpaax: not in enabled drivers build config 00:21:02.721 common/iavf: not in enabled drivers build config 00:21:02.721 common/idpf: not in enabled drivers build config 00:21:02.721 common/mvep: not in enabled drivers build config 00:21:02.721 common/octeontx: not in enabled drivers build config 00:21:02.721 bus/auxiliary: not in enabled drivers build config 00:21:02.721 bus/cdx: not in enabled drivers build config 00:21:02.721 bus/dpaa: not in enabled drivers build config 00:21:02.721 bus/fslmc: not in enabled drivers build config 00:21:02.721 bus/ifpga: not in enabled drivers build config 00:21:02.721 bus/platform: not in enabled drivers build config 00:21:02.721 bus/vmbus: not in enabled drivers build config 00:21:02.722 common/cnxk: not in enabled drivers build config 00:21:02.722 common/mlx5: not in enabled drivers build config 00:21:02.722 common/nfp: not in enabled drivers build config 00:21:02.722 common/qat: not in enabled drivers build config 00:21:02.722 common/sfc_efx: not in enabled drivers build config 00:21:02.722 mempool/bucket: not in enabled drivers build config 00:21:02.722 mempool/cnxk: not in enabled drivers build config 00:21:02.722 mempool/dpaa: not in enabled drivers build config 00:21:02.722 mempool/dpaa2: not in enabled drivers build config 00:21:02.722 
mempool/octeontx: not in enabled drivers build config 00:21:02.722 mempool/stack: not in enabled drivers build config 00:21:02.722 dma/cnxk: not in enabled drivers build config 00:21:02.722 dma/dpaa: not in enabled drivers build config 00:21:02.722 dma/dpaa2: not in enabled drivers build config 00:21:02.722 dma/hisilicon: not in enabled drivers build config 00:21:02.722 dma/idxd: not in enabled drivers build config 00:21:02.722 dma/ioat: not in enabled drivers build config 00:21:02.722 dma/skeleton: not in enabled drivers build config 00:21:02.722 net/af_packet: not in enabled drivers build config 00:21:02.722 net/af_xdp: not in enabled drivers build config 00:21:02.722 net/ark: not in enabled drivers build config 00:21:02.722 net/atlantic: not in enabled drivers build config 00:21:02.722 net/avp: not in enabled drivers build config 00:21:02.722 net/axgbe: not in enabled drivers build config 00:21:02.722 net/bnx2x: not in enabled drivers build config 00:21:02.722 net/bnxt: not in enabled drivers build config 00:21:02.722 net/bonding: not in enabled drivers build config 00:21:02.722 net/cnxk: not in enabled drivers build config 00:21:02.722 net/cpfl: not in enabled drivers build config 00:21:02.722 net/cxgbe: not in enabled drivers build config 00:21:02.722 net/dpaa: not in enabled drivers build config 00:21:02.722 net/dpaa2: not in enabled drivers build config 00:21:02.722 net/e1000: not in enabled drivers build config 00:21:02.722 net/ena: not in enabled drivers build config 00:21:02.722 net/enetc: not in enabled drivers build config 00:21:02.722 net/enetfec: not in enabled drivers build config 00:21:02.722 net/enic: not in enabled drivers build config 00:21:02.722 net/failsafe: not in enabled drivers build config 00:21:02.722 net/fm10k: not in enabled drivers build config 00:21:02.722 net/gve: not in enabled drivers build config 00:21:02.722 net/hinic: not in enabled drivers build config 00:21:02.722 net/hns3: not in enabled drivers build config 00:21:02.722 net/i40e: not in enabled drivers build config 00:21:02.722 net/iavf: not in enabled drivers build config 00:21:02.722 net/ice: not in enabled drivers build config 00:21:02.722 net/idpf: not in enabled drivers build config 00:21:02.722 net/igc: not in enabled drivers build config 00:21:02.722 net/ionic: not in enabled drivers build config 00:21:02.722 net/ipn3ke: not in enabled drivers build config 00:21:02.722 net/ixgbe: not in enabled drivers build config 00:21:02.722 net/mana: not in enabled drivers build config 00:21:02.722 net/memif: not in enabled drivers build config 00:21:02.722 net/mlx4: not in enabled drivers build config 00:21:02.722 net/mlx5: not in enabled drivers build config 00:21:02.722 net/mvneta: not in enabled drivers build config 00:21:02.722 net/mvpp2: not in enabled drivers build config 00:21:02.722 net/netvsc: not in enabled drivers build config 00:21:02.722 net/nfb: not in enabled drivers build config 00:21:02.722 net/nfp: not in enabled drivers build config 00:21:02.722 net/ngbe: not in enabled drivers build config 00:21:02.722 net/null: not in enabled drivers build config 00:21:02.722 net/octeontx: not in enabled drivers build config 00:21:02.722 net/octeon_ep: not in enabled drivers build config 00:21:02.722 net/pcap: not in enabled drivers build config 00:21:02.722 net/pfe: not in enabled drivers build config 00:21:02.722 net/qede: not in enabled drivers build config 00:21:02.722 net/ring: not in enabled drivers build config 00:21:02.722 net/sfc: not in enabled drivers build config 00:21:02.722 net/softnic: 
not in enabled drivers build config 00:21:02.722 net/tap: not in enabled drivers build config 00:21:02.722 net/thunderx: not in enabled drivers build config 00:21:02.722 net/txgbe: not in enabled drivers build config 00:21:02.722 net/vdev_netvsc: not in enabled drivers build config 00:21:02.722 net/vhost: not in enabled drivers build config 00:21:02.722 net/virtio: not in enabled drivers build config 00:21:02.722 net/vmxnet3: not in enabled drivers build config 00:21:02.722 raw/*: missing internal dependency, "rawdev" 00:21:02.722 crypto/armv8: not in enabled drivers build config 00:21:02.722 crypto/bcmfs: not in enabled drivers build config 00:21:02.722 crypto/caam_jr: not in enabled drivers build config 00:21:02.722 crypto/ccp: not in enabled drivers build config 00:21:02.722 crypto/cnxk: not in enabled drivers build config 00:21:02.722 crypto/dpaa_sec: not in enabled drivers build config 00:21:02.722 crypto/dpaa2_sec: not in enabled drivers build config 00:21:02.722 crypto/ipsec_mb: not in enabled drivers build config 00:21:02.722 crypto/mlx5: not in enabled drivers build config 00:21:02.722 crypto/mvsam: not in enabled drivers build config 00:21:02.722 crypto/nitrox: not in enabled drivers build config 00:21:02.722 crypto/null: not in enabled drivers build config 00:21:02.722 crypto/octeontx: not in enabled drivers build config 00:21:02.722 crypto/openssl: not in enabled drivers build config 00:21:02.722 crypto/scheduler: not in enabled drivers build config 00:21:02.722 crypto/uadk: not in enabled drivers build config 00:21:02.722 crypto/virtio: not in enabled drivers build config 00:21:02.722 compress/isal: not in enabled drivers build config 00:21:02.722 compress/mlx5: not in enabled drivers build config 00:21:02.722 compress/octeontx: not in enabled drivers build config 00:21:02.722 compress/zlib: not in enabled drivers build config 00:21:02.722 regex/*: missing internal dependency, "regexdev" 00:21:02.722 ml/*: missing internal dependency, "mldev" 00:21:02.722 vdpa/ifc: not in enabled drivers build config 00:21:02.722 vdpa/mlx5: not in enabled drivers build config 00:21:02.722 vdpa/nfp: not in enabled drivers build config 00:21:02.722 vdpa/sfc: not in enabled drivers build config 00:21:02.722 event/*: missing internal dependency, "eventdev" 00:21:02.722 baseband/*: missing internal dependency, "bbdev" 00:21:02.722 gpu/*: missing internal dependency, "gpudev" 00:21:02.722 00:21:02.722 00:21:02.722 Build targets in project: 85 00:21:02.722 00:21:02.722 DPDK 23.11.0 00:21:02.722 00:21:02.722 User defined options 00:21:02.722 buildtype : debug 00:21:02.722 default_library : shared 00:21:02.722 libdir : lib 00:21:02.722 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:21:02.722 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:21:02.722 c_link_args : 00:21:02.722 cpu_instruction_set: native 00:21:02.722 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:21:02.722 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:21:02.722 enable_docs : false 00:21:02.722 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:21:02.722 enable_kmods : 
false 00:21:02.722 tests : false 00:21:02.722 00:21:02.722 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:21:02.722 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:21:02.979 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:21:02.979 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:21:02.979 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:21:02.979 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:21:02.979 [5/265] Linking static target lib/librte_kvargs.a 00:21:02.979 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:21:02.979 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:21:02.979 [8/265] Linking static target lib/librte_log.a 00:21:02.979 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:21:03.235 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:21:03.493 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:21:03.493 [12/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:21:03.493 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:21:03.493 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:21:03.493 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:21:03.493 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:21:03.493 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:21:03.751 [18/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:21:03.751 [19/265] Linking static target lib/librte_telemetry.a 00:21:03.751 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:21:03.751 [21/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:21:03.751 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:21:04.009 [23/265] Linking target lib/librte_log.so.24.0 00:21:04.009 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:21:04.009 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:21:04.009 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:21:04.009 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:21:04.009 [28/265] Linking target lib/librte_kvargs.so.24.0 00:21:04.267 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:21:04.267 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:21:04.267 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:21:04.267 [32/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:21:04.524 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:21:04.524 [34/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:21:04.524 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:21:04.524 [36/265] Linking target lib/librte_telemetry.so.24.0 00:21:04.524 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:21:04.524 [38/265] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:21:04.782 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:21:04.782 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:21:04.782 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:21:04.782 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:21:04.782 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:21:04.782 [44/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:21:04.782 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:21:04.782 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:21:04.782 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:21:05.050 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:21:05.050 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:21:05.320 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:21:05.320 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:21:05.320 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:21:05.320 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:21:05.320 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:21:05.578 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:21:05.578 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:21:05.578 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:21:05.578 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:21:05.578 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:21:05.837 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:21:05.837 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:21:05.837 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:21:05.837 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:21:05.837 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:21:06.095 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:21:06.095 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:21:06.095 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:21:06.095 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:21:06.354 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:21:06.354 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:21:06.354 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:21:06.354 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:21:06.354 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:21:06.354 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:21:06.613 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:21:06.613 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:21:06.613 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:21:06.613 [78/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:21:06.613 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:21:06.613 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:21:06.871 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:21:06.871 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:21:06.871 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:21:07.130 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:21:07.130 [85/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:21:07.130 [86/265] Linking static target lib/librte_eal.a 00:21:07.130 [87/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:21:07.130 [88/265] Linking static target lib/librte_ring.a 00:21:07.389 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:21:07.389 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:21:07.389 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:21:07.389 [92/265] Linking static target lib/librte_rcu.a 00:21:07.389 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:21:07.389 [94/265] Linking static target lib/librte_mempool.a 00:21:07.647 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:21:07.647 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:21:07.647 [97/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:21:07.911 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:21:07.911 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:21:07.911 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:21:07.911 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:21:08.178 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:21:08.178 [103/265] Linking static target lib/librte_mbuf.a 00:21:08.179 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:21:08.179 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:21:08.179 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:21:08.179 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:21:08.437 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:21:08.437 [109/265] Linking static target lib/librte_net.a 00:21:08.696 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:21:08.696 [111/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:21:08.696 [112/265] Linking static target lib/librte_meter.a 00:21:08.696 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:21:08.954 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:21:08.954 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:21:08.954 [116/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:21:08.954 [117/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:21:09.211 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:21:09.211 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:21:09.504 
[120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:21:09.777 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:21:09.777 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:21:10.036 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:21:10.036 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:21:10.036 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:21:10.036 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:21:10.036 [127/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:21:10.036 [128/265] Linking static target lib/librte_pci.a 00:21:10.036 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:21:10.294 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:21:10.294 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:21:10.294 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:21:10.294 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:21:10.294 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:21:10.553 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:21:10.553 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:21:10.553 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:21:10.553 [138/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:21:10.553 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:21:10.553 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:21:10.553 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:21:10.553 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:21:10.553 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:21:10.810 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:21:10.810 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:21:10.810 [146/265] Linking static target lib/librte_cmdline.a 00:21:11.069 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:21:11.069 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:21:11.327 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:21:11.327 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:21:11.327 [151/265] Linking static target lib/librte_ethdev.a 00:21:11.327 [152/265] Linking static target lib/librte_timer.a 00:21:11.327 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:21:11.327 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:21:11.585 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:21:11.585 [156/265] Linking static target lib/librte_compressdev.a 00:21:11.585 [157/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:21:11.585 [158/265] Linking static target lib/librte_hash.a 00:21:11.843 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:21:11.843 [160/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:21:12.100 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:21:12.100 [162/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:21:12.100 [163/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:21:12.100 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:21:12.100 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:21:12.100 [166/265] Linking static target lib/librte_dmadev.a 00:21:12.358 [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:21:12.358 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:21:12.358 [169/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:12.616 [170/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:21:12.616 [171/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:21:12.616 [172/265] Linking static target lib/librte_cryptodev.a 00:21:12.616 [173/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:21:12.616 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:21:12.874 [175/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:21:12.874 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:12.874 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:21:12.874 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:21:12.874 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:21:12.874 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:21:12.874 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:21:13.131 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:21:13.131 [183/265] Linking static target lib/librte_power.a 00:21:13.389 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:21:13.389 [185/265] Linking static target lib/librte_reorder.a 00:21:13.389 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:21:13.389 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:21:13.389 [188/265] Linking static target lib/librte_security.a 00:21:13.389 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:21:13.647 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:21:13.647 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:21:13.906 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:21:14.163 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:21:14.163 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:21:14.163 [195/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:21:14.421 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:21:14.421 [197/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:21:14.421 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:21:14.679 [199/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:21:14.679 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:21:14.679 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:21:14.937 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:21:14.937 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:21:14.937 [204/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:14.937 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:21:14.937 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:21:15.195 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:21:15.195 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:21:15.195 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:21:15.195 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:21:15.195 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:21:15.195 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:21:15.195 [213/265] Linking static target drivers/librte_bus_pci.a 00:21:15.195 [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:21:15.195 [215/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:21:15.195 [216/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:21:15.454 [217/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:21:15.454 [218/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:21:15.454 [219/265] Linking static target drivers/librte_bus_vdev.a 00:21:15.454 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:21:15.454 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:21:15.454 [222/265] Linking static target drivers/librte_mempool_ring.a 00:21:15.454 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:21:15.712 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:15.712 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:21:16.696 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:21:16.696 [227/265] Linking static target lib/librte_vhost.a 00:21:18.072 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:21:18.332 [229/265] Linking target lib/librte_eal.so.24.0 00:21:18.332 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:21:18.332 [231/265] Linking target lib/librte_pci.so.24.0 00:21:18.332 [232/265] Linking target lib/librte_ring.so.24.0 00:21:18.332 [233/265] Linking target lib/librte_dmadev.so.24.0 00:21:18.332 [234/265] Linking target lib/librte_meter.so.24.0 00:21:18.332 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:21:18.332 [236/265] Linking target lib/librte_timer.so.24.0 00:21:18.596 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:21:18.596 [238/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:21:18.596 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:21:18.596 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:21:18.596 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:21:18.596 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:21:18.596 [243/265] Linking target lib/librte_mempool.so.24.0 00:21:18.596 [244/265] Linking target lib/librte_rcu.so.24.0 00:21:18.596 [245/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:21:18.596 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:21:18.862 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:21:18.862 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:21:18.862 [249/265] Linking target lib/librte_mbuf.so.24.0 00:21:18.862 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:21:18.862 [251/265] Linking target lib/librte_net.so.24.0 00:21:18.862 [252/265] Linking target lib/librte_compressdev.so.24.0 00:21:18.862 [253/265] Linking target lib/librte_reorder.so.24.0 00:21:18.862 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:21:19.130 [255/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:21:19.130 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:21:19.130 [257/265] Linking target lib/librte_cmdline.so.24.0 00:21:19.130 [258/265] Linking target lib/librte_hash.so.24.0 00:21:19.130 [259/265] Linking target lib/librte_security.so.24.0 00:21:19.401 [260/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:21:19.674 [261/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:19.674 [262/265] Linking target lib/librte_ethdev.so.24.0 00:21:19.948 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:21:19.948 [264/265] Linking target lib/librte_power.so.24.0 00:21:19.948 [265/265] Linking target lib/librte_vhost.so.24.0 00:21:19.948 INFO: autodetecting backend as ninja 00:21:19.948 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:21:20.902 CC lib/log/log.o 00:21:20.902 CC lib/log/log_deprecated.o 00:21:20.902 CC lib/log/log_flags.o 00:21:20.902 CC lib/ut_mock/mock.o 00:21:20.902 CC lib/ut/ut.o 00:21:21.161 LIB libspdk_ut_mock.a 00:21:21.161 LIB libspdk_log.a 00:21:21.161 LIB libspdk_ut.a 00:21:21.161 SO libspdk_ut_mock.so.5.0 00:21:21.161 SO libspdk_log.so.6.1 00:21:21.161 SO libspdk_ut.so.1.0 00:21:21.161 SYMLINK libspdk_ut_mock.so 00:21:21.161 SYMLINK libspdk_log.so 00:21:21.161 SYMLINK libspdk_ut.so 00:21:21.457 CC lib/util/base64.o 00:21:21.457 CC lib/util/cpuset.o 00:21:21.457 CC lib/util/bit_array.o 00:21:21.457 CC lib/util/crc32.o 00:21:21.457 CC lib/util/crc16.o 00:21:21.457 CC lib/util/crc32c.o 00:21:21.457 CC lib/ioat/ioat.o 00:21:21.457 CXX lib/trace_parser/trace.o 00:21:21.457 CC lib/dma/dma.o 00:21:21.457 CC lib/vfio_user/host/vfio_user_pci.o 00:21:21.457 CC lib/util/crc32_ieee.o 00:21:21.457 CC lib/util/crc64.o 00:21:21.457 CC lib/vfio_user/host/vfio_user.o 00:21:21.457 CC lib/util/dif.o 00:21:21.457 CC lib/util/fd.o 00:21:21.715 LIB libspdk_dma.a 00:21:21.715 SO libspdk_dma.so.3.0 
00:21:21.715 CC lib/util/file.o 00:21:21.715 LIB libspdk_ioat.a 00:21:21.715 CC lib/util/hexlify.o 00:21:21.715 CC lib/util/iov.o 00:21:21.715 SYMLINK libspdk_dma.so 00:21:21.715 CC lib/util/math.o 00:21:21.715 SO libspdk_ioat.so.6.0 00:21:21.715 CC lib/util/pipe.o 00:21:21.715 CC lib/util/strerror_tls.o 00:21:21.715 LIB libspdk_vfio_user.a 00:21:21.715 SYMLINK libspdk_ioat.so 00:21:21.715 CC lib/util/string.o 00:21:21.715 CC lib/util/uuid.o 00:21:21.715 SO libspdk_vfio_user.so.4.0 00:21:21.715 CC lib/util/fd_group.o 00:21:21.715 SYMLINK libspdk_vfio_user.so 00:21:21.715 CC lib/util/xor.o 00:21:21.715 CC lib/util/zipf.o 00:21:21.973 LIB libspdk_util.a 00:21:22.231 SO libspdk_util.so.8.0 00:21:22.231 LIB libspdk_trace_parser.a 00:21:22.231 SO libspdk_trace_parser.so.4.0 00:21:22.232 SYMLINK libspdk_util.so 00:21:22.490 SYMLINK libspdk_trace_parser.so 00:21:22.490 CC lib/json/json_parse.o 00:21:22.490 CC lib/json/json_util.o 00:21:22.490 CC lib/json/json_write.o 00:21:22.490 CC lib/env_dpdk/env.o 00:21:22.490 CC lib/env_dpdk/memory.o 00:21:22.490 CC lib/env_dpdk/pci.o 00:21:22.490 CC lib/conf/conf.o 00:21:22.490 CC lib/vmd/vmd.o 00:21:22.490 CC lib/idxd/idxd.o 00:21:22.490 CC lib/rdma/common.o 00:21:22.748 LIB libspdk_conf.a 00:21:22.748 CC lib/rdma/rdma_verbs.o 00:21:22.748 CC lib/vmd/led.o 00:21:22.748 SO libspdk_conf.so.5.0 00:21:22.748 LIB libspdk_json.a 00:21:22.748 SO libspdk_json.so.5.1 00:21:22.748 SYMLINK libspdk_conf.so 00:21:22.748 CC lib/idxd/idxd_user.o 00:21:22.748 CC lib/env_dpdk/init.o 00:21:22.748 CC lib/env_dpdk/threads.o 00:21:22.748 SYMLINK libspdk_json.so 00:21:22.748 CC lib/env_dpdk/pci_ioat.o 00:21:22.748 CC lib/env_dpdk/pci_virtio.o 00:21:22.748 LIB libspdk_rdma.a 00:21:22.748 SO libspdk_rdma.so.5.0 00:21:23.006 CC lib/env_dpdk/pci_vmd.o 00:21:23.006 CC lib/env_dpdk/pci_idxd.o 00:21:23.006 CC lib/env_dpdk/pci_event.o 00:21:23.006 CC lib/env_dpdk/sigbus_handler.o 00:21:23.006 SYMLINK libspdk_rdma.so 00:21:23.006 LIB libspdk_idxd.a 00:21:23.006 SO libspdk_idxd.so.11.0 00:21:23.006 CC lib/env_dpdk/pci_dpdk.o 00:21:23.006 CC lib/env_dpdk/pci_dpdk_2207.o 00:21:23.006 LIB libspdk_vmd.a 00:21:23.006 CC lib/jsonrpc/jsonrpc_server.o 00:21:23.006 CC lib/env_dpdk/pci_dpdk_2211.o 00:21:23.006 SYMLINK libspdk_idxd.so 00:21:23.006 SO libspdk_vmd.so.5.0 00:21:23.006 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:21:23.006 CC lib/jsonrpc/jsonrpc_client.o 00:21:23.006 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:21:23.006 SYMLINK libspdk_vmd.so 00:21:23.264 LIB libspdk_jsonrpc.a 00:21:23.264 SO libspdk_jsonrpc.so.5.1 00:21:23.522 SYMLINK libspdk_jsonrpc.so 00:21:23.780 LIB libspdk_env_dpdk.a 00:21:23.780 CC lib/rpc/rpc.o 00:21:23.780 SO libspdk_env_dpdk.so.13.0 00:21:24.038 SYMLINK libspdk_env_dpdk.so 00:21:24.038 LIB libspdk_rpc.a 00:21:24.038 SO libspdk_rpc.so.5.0 00:21:24.038 SYMLINK libspdk_rpc.so 00:21:24.298 CC lib/notify/notify.o 00:21:24.298 CC lib/notify/notify_rpc.o 00:21:24.298 CC lib/trace/trace.o 00:21:24.298 CC lib/sock/sock.o 00:21:24.298 CC lib/sock/sock_rpc.o 00:21:24.298 CC lib/trace/trace_rpc.o 00:21:24.298 CC lib/trace/trace_flags.o 00:21:24.557 LIB libspdk_notify.a 00:21:24.557 SO libspdk_notify.so.5.0 00:21:24.557 LIB libspdk_trace.a 00:21:24.557 SYMLINK libspdk_notify.so 00:21:24.557 LIB libspdk_sock.a 00:21:24.557 SO libspdk_trace.so.9.0 00:21:24.815 SO libspdk_sock.so.8.0 00:21:24.815 SYMLINK libspdk_trace.so 00:21:24.815 SYMLINK libspdk_sock.so 00:21:25.073 CC lib/thread/thread.o 00:21:25.073 CC lib/thread/iobuf.o 00:21:25.073 CC lib/nvme/nvme_ctrlr_cmd.o 00:21:25.073 CC 
lib/nvme/nvme_ctrlr.o 00:21:25.073 CC lib/nvme/nvme_fabric.o 00:21:25.073 CC lib/nvme/nvme_ns_cmd.o 00:21:25.073 CC lib/nvme/nvme_ns.o 00:21:25.073 CC lib/nvme/nvme_qpair.o 00:21:25.073 CC lib/nvme/nvme_pcie.o 00:21:25.073 CC lib/nvme/nvme_pcie_common.o 00:21:25.330 CC lib/nvme/nvme.o 00:21:25.587 CC lib/nvme/nvme_quirks.o 00:21:25.587 CC lib/nvme/nvme_transport.o 00:21:25.844 CC lib/nvme/nvme_discovery.o 00:21:25.844 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:21:25.844 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:21:25.844 CC lib/nvme/nvme_tcp.o 00:21:26.101 CC lib/nvme/nvme_opal.o 00:21:26.101 CC lib/nvme/nvme_io_msg.o 00:21:26.101 CC lib/nvme/nvme_poll_group.o 00:21:26.379 CC lib/nvme/nvme_zns.o 00:21:26.379 LIB libspdk_thread.a 00:21:26.379 CC lib/nvme/nvme_cuse.o 00:21:26.379 SO libspdk_thread.so.9.0 00:21:26.379 CC lib/nvme/nvme_vfio_user.o 00:21:26.379 CC lib/nvme/nvme_rdma.o 00:21:26.379 SYMLINK libspdk_thread.so 00:21:26.637 CC lib/accel/accel.o 00:21:26.637 CC lib/blob/blobstore.o 00:21:26.895 CC lib/init/json_config.o 00:21:26.895 CC lib/virtio/virtio.o 00:21:26.895 CC lib/init/subsystem.o 00:21:26.895 CC lib/init/subsystem_rpc.o 00:21:26.895 CC lib/init/rpc.o 00:21:26.895 CC lib/virtio/virtio_vhost_user.o 00:21:27.154 CC lib/virtio/virtio_vfio_user.o 00:21:27.154 LIB libspdk_init.a 00:21:27.154 CC lib/virtio/virtio_pci.o 00:21:27.154 SO libspdk_init.so.4.0 00:21:27.154 CC lib/accel/accel_rpc.o 00:21:27.154 CC lib/blob/request.o 00:21:27.154 CC lib/vfu_tgt/tgt_endpoint.o 00:21:27.154 SYMLINK libspdk_init.so 00:21:27.154 CC lib/vfu_tgt/tgt_rpc.o 00:21:27.412 CC lib/blob/zeroes.o 00:21:27.412 CC lib/accel/accel_sw.o 00:21:27.412 CC lib/blob/blob_bs_dev.o 00:21:27.412 CC lib/event/app.o 00:21:27.412 LIB libspdk_virtio.a 00:21:27.412 SO libspdk_virtio.so.6.0 00:21:27.412 CC lib/event/reactor.o 00:21:27.412 CC lib/event/log_rpc.o 00:21:27.412 CC lib/event/app_rpc.o 00:21:27.412 LIB libspdk_vfu_tgt.a 00:21:27.412 SYMLINK libspdk_virtio.so 00:21:27.412 CC lib/event/scheduler_static.o 00:21:27.412 SO libspdk_vfu_tgt.so.2.0 00:21:27.671 SYMLINK libspdk_vfu_tgt.so 00:21:27.671 LIB libspdk_nvme.a 00:21:27.671 LIB libspdk_accel.a 00:21:27.671 SO libspdk_accel.so.14.0 00:21:27.671 SYMLINK libspdk_accel.so 00:21:27.671 SO libspdk_nvme.so.12.0 00:21:27.671 LIB libspdk_event.a 00:21:27.930 SO libspdk_event.so.12.0 00:21:27.930 CC lib/bdev/bdev.o 00:21:27.930 CC lib/bdev/part.o 00:21:27.930 CC lib/bdev/bdev_rpc.o 00:21:27.930 CC lib/bdev/scsi_nvme.o 00:21:27.930 CC lib/bdev/bdev_zone.o 00:21:27.930 SYMLINK libspdk_event.so 00:21:27.930 SYMLINK libspdk_nvme.so 00:21:29.307 LIB libspdk_blob.a 00:21:29.307 SO libspdk_blob.so.10.1 00:21:29.307 SYMLINK libspdk_blob.so 00:21:29.565 CC lib/blobfs/blobfs.o 00:21:29.565 CC lib/blobfs/tree.o 00:21:29.565 CC lib/lvol/lvol.o 00:21:30.133 LIB libspdk_bdev.a 00:21:30.133 SO libspdk_bdev.so.14.0 00:21:30.133 LIB libspdk_blobfs.a 00:21:30.392 SO libspdk_blobfs.so.9.0 00:21:30.392 SYMLINK libspdk_bdev.so 00:21:30.392 SYMLINK libspdk_blobfs.so 00:21:30.392 LIB libspdk_lvol.a 00:21:30.392 SO libspdk_lvol.so.9.1 00:21:30.392 CC lib/nvmf/ctrlr_bdev.o 00:21:30.392 CC lib/nvmf/ctrlr.o 00:21:30.392 CC lib/ublk/ublk.o 00:21:30.392 CC lib/nvmf/subsystem.o 00:21:30.392 CC lib/nvmf/ctrlr_discovery.o 00:21:30.392 CC lib/ublk/ublk_rpc.o 00:21:30.392 SYMLINK libspdk_lvol.so 00:21:30.392 CC lib/ftl/ftl_core.o 00:21:30.392 CC lib/ftl/ftl_init.o 00:21:30.392 CC lib/nbd/nbd.o 00:21:30.392 CC lib/scsi/dev.o 00:21:30.650 CC lib/nvmf/nvmf.o 00:21:30.650 CC lib/nbd/nbd_rpc.o 00:21:30.650 CC 
lib/scsi/lun.o 00:21:30.909 CC lib/ftl/ftl_layout.o 00:21:30.909 CC lib/ftl/ftl_debug.o 00:21:30.909 LIB libspdk_nbd.a 00:21:30.909 SO libspdk_nbd.so.6.0 00:21:30.909 CC lib/ftl/ftl_io.o 00:21:30.909 SYMLINK libspdk_nbd.so 00:21:30.909 CC lib/ftl/ftl_sb.o 00:21:30.909 CC lib/scsi/port.o 00:21:31.167 LIB libspdk_ublk.a 00:21:31.167 CC lib/ftl/ftl_l2p.o 00:21:31.167 SO libspdk_ublk.so.2.0 00:21:31.167 CC lib/nvmf/nvmf_rpc.o 00:21:31.167 CC lib/nvmf/transport.o 00:21:31.167 CC lib/scsi/scsi.o 00:21:31.167 CC lib/scsi/scsi_bdev.o 00:21:31.167 SYMLINK libspdk_ublk.so 00:21:31.167 CC lib/scsi/scsi_pr.o 00:21:31.167 CC lib/ftl/ftl_l2p_flat.o 00:21:31.167 CC lib/nvmf/tcp.o 00:21:31.424 CC lib/nvmf/vfio_user.o 00:21:31.424 CC lib/ftl/ftl_nv_cache.o 00:21:31.424 CC lib/nvmf/rdma.o 00:21:31.424 CC lib/ftl/ftl_band.o 00:21:31.424 CC lib/scsi/scsi_rpc.o 00:21:31.683 CC lib/scsi/task.o 00:21:31.683 CC lib/ftl/ftl_band_ops.o 00:21:31.683 CC lib/ftl/ftl_writer.o 00:21:31.941 LIB libspdk_scsi.a 00:21:31.941 CC lib/ftl/ftl_rq.o 00:21:31.941 CC lib/ftl/ftl_reloc.o 00:21:31.941 SO libspdk_scsi.so.8.0 00:21:31.942 CC lib/ftl/ftl_l2p_cache.o 00:21:31.942 SYMLINK libspdk_scsi.so 00:21:31.942 CC lib/ftl/ftl_p2l.o 00:21:31.942 CC lib/ftl/mngt/ftl_mngt.o 00:21:31.942 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:21:31.942 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:21:32.200 CC lib/ftl/mngt/ftl_mngt_startup.o 00:21:32.200 CC lib/ftl/mngt/ftl_mngt_md.o 00:21:32.200 CC lib/ftl/mngt/ftl_mngt_misc.o 00:21:32.459 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:21:32.459 CC lib/vhost/vhost.o 00:21:32.459 CC lib/iscsi/conn.o 00:21:32.459 CC lib/iscsi/init_grp.o 00:21:32.459 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:21:32.459 CC lib/vhost/vhost_rpc.o 00:21:32.459 CC lib/vhost/vhost_scsi.o 00:21:32.459 CC lib/vhost/vhost_blk.o 00:21:32.717 CC lib/iscsi/iscsi.o 00:21:32.717 CC lib/ftl/mngt/ftl_mngt_band.o 00:21:32.717 CC lib/vhost/rte_vhost_user.o 00:21:32.717 CC lib/iscsi/md5.o 00:21:32.976 CC lib/iscsi/param.o 00:21:32.976 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:21:32.976 CC lib/iscsi/portal_grp.o 00:21:32.976 CC lib/iscsi/tgt_node.o 00:21:33.235 CC lib/iscsi/iscsi_subsystem.o 00:21:33.235 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:21:33.235 CC lib/iscsi/iscsi_rpc.o 00:21:33.235 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:21:33.493 CC lib/iscsi/task.o 00:21:33.493 LIB libspdk_nvmf.a 00:21:33.493 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:21:33.493 CC lib/ftl/utils/ftl_conf.o 00:21:33.493 CC lib/ftl/utils/ftl_md.o 00:21:33.493 SO libspdk_nvmf.so.17.0 00:21:33.493 CC lib/ftl/utils/ftl_mempool.o 00:21:33.493 CC lib/ftl/utils/ftl_bitmap.o 00:21:33.493 CC lib/ftl/utils/ftl_property.o 00:21:33.493 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:21:33.493 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:21:33.752 SYMLINK libspdk_nvmf.so 00:21:33.752 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:21:33.752 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:21:33.752 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:21:33.752 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:21:33.752 LIB libspdk_vhost.a 00:21:33.752 SO libspdk_vhost.so.7.1 00:21:33.752 CC lib/ftl/upgrade/ftl_sb_v3.o 00:21:33.752 CC lib/ftl/upgrade/ftl_sb_v5.o 00:21:33.752 CC lib/ftl/nvc/ftl_nvc_dev.o 00:21:34.009 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:21:34.009 CC lib/ftl/base/ftl_base_dev.o 00:21:34.009 CC lib/ftl/base/ftl_base_bdev.o 00:21:34.009 LIB libspdk_iscsi.a 00:21:34.009 CC lib/ftl/ftl_trace.o 00:21:34.010 SYMLINK libspdk_vhost.so 00:21:34.010 SO libspdk_iscsi.so.7.0 00:21:34.267 SYMLINK libspdk_iscsi.so 00:21:34.267 LIB libspdk_ftl.a 00:21:34.525 SO 
libspdk_ftl.so.8.0 00:21:34.783 SYMLINK libspdk_ftl.so 00:21:35.041 CC module/vfu_device/vfu_virtio.o 00:21:35.041 CC module/env_dpdk/env_dpdk_rpc.o 00:21:35.041 CC module/blob/bdev/blob_bdev.o 00:21:35.041 CC module/sock/posix/posix.o 00:21:35.041 CC module/sock/uring/uring.o 00:21:35.041 CC module/accel/iaa/accel_iaa.o 00:21:35.041 CC module/accel/error/accel_error.o 00:21:35.041 CC module/accel/dsa/accel_dsa.o 00:21:35.041 CC module/accel/ioat/accel_ioat.o 00:21:35.041 CC module/scheduler/dynamic/scheduler_dynamic.o 00:21:35.041 LIB libspdk_env_dpdk_rpc.a 00:21:35.041 SO libspdk_env_dpdk_rpc.so.5.0 00:21:35.299 SYMLINK libspdk_env_dpdk_rpc.so 00:21:35.299 CC module/vfu_device/vfu_virtio_blk.o 00:21:35.299 CC module/accel/error/accel_error_rpc.o 00:21:35.299 CC module/accel/ioat/accel_ioat_rpc.o 00:21:35.299 CC module/accel/iaa/accel_iaa_rpc.o 00:21:35.299 LIB libspdk_scheduler_dynamic.a 00:21:35.299 SO libspdk_scheduler_dynamic.so.3.0 00:21:35.299 CC module/accel/dsa/accel_dsa_rpc.o 00:21:35.299 LIB libspdk_blob_bdev.a 00:21:35.299 SO libspdk_blob_bdev.so.10.1 00:21:35.299 SYMLINK libspdk_scheduler_dynamic.so 00:21:35.299 LIB libspdk_accel_error.a 00:21:35.299 LIB libspdk_accel_iaa.a 00:21:35.299 LIB libspdk_accel_ioat.a 00:21:35.299 SYMLINK libspdk_blob_bdev.so 00:21:35.299 SO libspdk_accel_error.so.1.0 00:21:35.299 CC module/vfu_device/vfu_virtio_scsi.o 00:21:35.299 SO libspdk_accel_iaa.so.2.0 00:21:35.299 SO libspdk_accel_ioat.so.5.0 00:21:35.299 LIB libspdk_accel_dsa.a 00:21:35.557 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:21:35.557 SO libspdk_accel_dsa.so.4.0 00:21:35.557 SYMLINK libspdk_accel_error.so 00:21:35.557 SYMLINK libspdk_accel_iaa.so 00:21:35.557 SYMLINK libspdk_accel_ioat.so 00:21:35.557 CC module/vfu_device/vfu_virtio_rpc.o 00:21:35.557 SYMLINK libspdk_accel_dsa.so 00:21:35.557 CC module/scheduler/gscheduler/gscheduler.o 00:21:35.557 LIB libspdk_scheduler_dpdk_governor.a 00:21:35.557 SO libspdk_scheduler_dpdk_governor.so.3.0 00:21:35.557 CC module/blobfs/bdev/blobfs_bdev.o 00:21:35.557 CC module/bdev/delay/vbdev_delay.o 00:21:35.557 CC module/bdev/error/vbdev_error.o 00:21:35.557 SYMLINK libspdk_scheduler_dpdk_governor.so 00:21:35.557 LIB libspdk_sock_uring.a 00:21:35.557 LIB libspdk_sock_posix.a 00:21:35.816 LIB libspdk_vfu_device.a 00:21:35.816 SO libspdk_sock_uring.so.4.0 00:21:35.816 LIB libspdk_scheduler_gscheduler.a 00:21:35.816 SO libspdk_sock_posix.so.5.0 00:21:35.816 CC module/bdev/lvol/vbdev_lvol.o 00:21:35.816 CC module/bdev/gpt/gpt.o 00:21:35.816 CC module/bdev/malloc/bdev_malloc.o 00:21:35.816 SO libspdk_vfu_device.so.2.0 00:21:35.816 SO libspdk_scheduler_gscheduler.so.3.0 00:21:35.816 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:21:35.816 SYMLINK libspdk_sock_uring.so 00:21:35.816 SYMLINK libspdk_sock_posix.so 00:21:35.816 SYMLINK libspdk_scheduler_gscheduler.so 00:21:35.816 SYMLINK libspdk_vfu_device.so 00:21:35.816 CC module/bdev/error/vbdev_error_rpc.o 00:21:35.816 CC module/bdev/null/bdev_null.o 00:21:35.816 CC module/bdev/nvme/bdev_nvme.o 00:21:35.816 CC module/bdev/passthru/vbdev_passthru.o 00:21:35.816 CC module/bdev/gpt/vbdev_gpt.o 00:21:35.816 CC module/bdev/raid/bdev_raid.o 00:21:35.816 LIB libspdk_blobfs_bdev.a 00:21:35.816 CC module/bdev/delay/vbdev_delay_rpc.o 00:21:36.074 SO libspdk_blobfs_bdev.so.5.0 00:21:36.074 LIB libspdk_bdev_error.a 00:21:36.074 SYMLINK libspdk_blobfs_bdev.so 00:21:36.074 CC module/bdev/raid/bdev_raid_rpc.o 00:21:36.074 SO libspdk_bdev_error.so.5.0 00:21:36.074 CC module/bdev/malloc/bdev_malloc_rpc.o 
00:21:36.074 LIB libspdk_bdev_delay.a 00:21:36.074 CC module/bdev/null/bdev_null_rpc.o 00:21:36.074 SYMLINK libspdk_bdev_error.so 00:21:36.074 CC module/bdev/raid/bdev_raid_sb.o 00:21:36.074 SO libspdk_bdev_delay.so.5.0 00:21:36.074 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:21:36.332 LIB libspdk_bdev_gpt.a 00:21:36.332 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:21:36.332 LIB libspdk_bdev_malloc.a 00:21:36.332 SO libspdk_bdev_gpt.so.5.0 00:21:36.332 SYMLINK libspdk_bdev_delay.so 00:21:36.332 CC module/bdev/raid/raid0.o 00:21:36.332 SO libspdk_bdev_malloc.so.5.0 00:21:36.332 LIB libspdk_bdev_null.a 00:21:36.332 SYMLINK libspdk_bdev_gpt.so 00:21:36.332 CC module/bdev/raid/raid1.o 00:21:36.332 SO libspdk_bdev_null.so.5.0 00:21:36.332 SYMLINK libspdk_bdev_malloc.so 00:21:36.332 CC module/bdev/raid/concat.o 00:21:36.332 LIB libspdk_bdev_passthru.a 00:21:36.332 SYMLINK libspdk_bdev_null.so 00:21:36.332 SO libspdk_bdev_passthru.so.5.0 00:21:36.332 CC module/bdev/split/vbdev_split.o 00:21:36.591 CC module/bdev/zone_block/vbdev_zone_block.o 00:21:36.591 SYMLINK libspdk_bdev_passthru.so 00:21:36.591 CC module/bdev/uring/bdev_uring.o 00:21:36.591 CC module/bdev/nvme/bdev_nvme_rpc.o 00:21:36.591 LIB libspdk_bdev_lvol.a 00:21:36.591 CC module/bdev/nvme/nvme_rpc.o 00:21:36.591 SO libspdk_bdev_lvol.so.5.0 00:21:36.591 CC module/bdev/aio/bdev_aio.o 00:21:36.591 CC module/bdev/nvme/bdev_mdns_client.o 00:21:36.591 SYMLINK libspdk_bdev_lvol.so 00:21:36.591 CC module/bdev/nvme/vbdev_opal.o 00:21:36.591 CC module/bdev/split/vbdev_split_rpc.o 00:21:36.849 LIB libspdk_bdev_raid.a 00:21:36.849 CC module/bdev/nvme/vbdev_opal_rpc.o 00:21:36.849 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:21:36.849 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:21:36.849 SO libspdk_bdev_raid.so.5.0 00:21:36.849 CC module/bdev/uring/bdev_uring_rpc.o 00:21:36.849 LIB libspdk_bdev_split.a 00:21:36.849 SO libspdk_bdev_split.so.5.0 00:21:36.849 SYMLINK libspdk_bdev_raid.so 00:21:36.849 CC module/bdev/aio/bdev_aio_rpc.o 00:21:36.849 SYMLINK libspdk_bdev_split.so 00:21:36.849 LIB libspdk_bdev_zone_block.a 00:21:36.849 SO libspdk_bdev_zone_block.so.5.0 00:21:37.107 LIB libspdk_bdev_uring.a 00:21:37.107 CC module/bdev/ftl/bdev_ftl.o 00:21:37.107 CC module/bdev/ftl/bdev_ftl_rpc.o 00:21:37.107 CC module/bdev/iscsi/bdev_iscsi.o 00:21:37.107 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:21:37.107 SO libspdk_bdev_uring.so.5.0 00:21:37.107 LIB libspdk_bdev_aio.a 00:21:37.107 SYMLINK libspdk_bdev_zone_block.so 00:21:37.107 CC module/bdev/virtio/bdev_virtio_scsi.o 00:21:37.107 CC module/bdev/virtio/bdev_virtio_blk.o 00:21:37.107 CC module/bdev/virtio/bdev_virtio_rpc.o 00:21:37.107 SO libspdk_bdev_aio.so.5.0 00:21:37.107 SYMLINK libspdk_bdev_uring.so 00:21:37.107 SYMLINK libspdk_bdev_aio.so 00:21:37.366 LIB libspdk_bdev_ftl.a 00:21:37.366 SO libspdk_bdev_ftl.so.5.0 00:21:37.366 LIB libspdk_bdev_iscsi.a 00:21:37.366 SO libspdk_bdev_iscsi.so.5.0 00:21:37.366 SYMLINK libspdk_bdev_ftl.so 00:21:37.366 SYMLINK libspdk_bdev_iscsi.so 00:21:37.366 LIB libspdk_bdev_virtio.a 00:21:37.624 SO libspdk_bdev_virtio.so.5.0 00:21:37.624 SYMLINK libspdk_bdev_virtio.so 00:21:37.881 LIB libspdk_bdev_nvme.a 00:21:37.881 SO libspdk_bdev_nvme.so.6.0 00:21:38.139 SYMLINK libspdk_bdev_nvme.so 00:21:38.398 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:21:38.398 CC module/event/subsystems/sock/sock.o 00:21:38.398 CC module/event/subsystems/scheduler/scheduler.o 00:21:38.398 CC module/event/subsystems/vmd/vmd.o 00:21:38.398 CC module/event/subsystems/vmd/vmd_rpc.o 
00:21:38.398 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:21:38.398 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:21:38.398 CC module/event/subsystems/iobuf/iobuf.o 00:21:38.654 LIB libspdk_event_sock.a 00:21:38.654 LIB libspdk_event_vhost_blk.a 00:21:38.654 LIB libspdk_event_scheduler.a 00:21:38.654 LIB libspdk_event_vmd.a 00:21:38.654 LIB libspdk_event_vfu_tgt.a 00:21:38.654 LIB libspdk_event_iobuf.a 00:21:38.654 SO libspdk_event_vhost_blk.so.2.0 00:21:38.654 SO libspdk_event_scheduler.so.3.0 00:21:38.654 SO libspdk_event_sock.so.4.0 00:21:38.654 SO libspdk_event_vmd.so.5.0 00:21:38.654 SO libspdk_event_vfu_tgt.so.2.0 00:21:38.655 SO libspdk_event_iobuf.so.2.0 00:21:38.655 SYMLINK libspdk_event_vhost_blk.so 00:21:38.655 SYMLINK libspdk_event_sock.so 00:21:38.655 SYMLINK libspdk_event_vfu_tgt.so 00:21:38.655 SYMLINK libspdk_event_vmd.so 00:21:38.655 SYMLINK libspdk_event_scheduler.so 00:21:38.655 SYMLINK libspdk_event_iobuf.so 00:21:38.912 CC module/event/subsystems/accel/accel.o 00:21:39.169 LIB libspdk_event_accel.a 00:21:39.169 SO libspdk_event_accel.so.5.0 00:21:39.169 SYMLINK libspdk_event_accel.so 00:21:39.426 CC module/event/subsystems/bdev/bdev.o 00:21:39.683 LIB libspdk_event_bdev.a 00:21:39.683 SO libspdk_event_bdev.so.5.0 00:21:39.683 SYMLINK libspdk_event_bdev.so 00:21:39.941 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:21:39.941 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:21:39.941 CC module/event/subsystems/ublk/ublk.o 00:21:39.941 CC module/event/subsystems/scsi/scsi.o 00:21:39.941 CC module/event/subsystems/nbd/nbd.o 00:21:40.200 LIB libspdk_event_nbd.a 00:21:40.200 LIB libspdk_event_ublk.a 00:21:40.200 LIB libspdk_event_scsi.a 00:21:40.200 SO libspdk_event_ublk.so.2.0 00:21:40.200 SO libspdk_event_nbd.so.5.0 00:21:40.200 SO libspdk_event_scsi.so.5.0 00:21:40.200 LIB libspdk_event_nvmf.a 00:21:40.200 SYMLINK libspdk_event_nbd.so 00:21:40.200 SYMLINK libspdk_event_ublk.so 00:21:40.200 SYMLINK libspdk_event_scsi.so 00:21:40.200 SO libspdk_event_nvmf.so.5.0 00:21:40.200 SYMLINK libspdk_event_nvmf.so 00:21:40.459 CC module/event/subsystems/iscsi/iscsi.o 00:21:40.459 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:21:40.718 LIB libspdk_event_vhost_scsi.a 00:21:40.718 LIB libspdk_event_iscsi.a 00:21:40.719 SO libspdk_event_vhost_scsi.so.2.0 00:21:40.719 SO libspdk_event_iscsi.so.5.0 00:21:40.719 SYMLINK libspdk_event_vhost_scsi.so 00:21:40.719 SYMLINK libspdk_event_iscsi.so 00:21:40.978 SO libspdk.so.5.0 00:21:40.978 SYMLINK libspdk.so 00:21:40.978 TEST_HEADER include/spdk/accel.h 00:21:40.978 TEST_HEADER include/spdk/accel_module.h 00:21:40.978 CXX app/trace/trace.o 00:21:40.978 TEST_HEADER include/spdk/assert.h 00:21:40.978 TEST_HEADER include/spdk/barrier.h 00:21:40.978 TEST_HEADER include/spdk/base64.h 00:21:40.978 TEST_HEADER include/spdk/bdev.h 00:21:40.978 TEST_HEADER include/spdk/bdev_module.h 00:21:40.978 TEST_HEADER include/spdk/bdev_zone.h 00:21:40.978 TEST_HEADER include/spdk/bit_array.h 00:21:40.978 TEST_HEADER include/spdk/bit_pool.h 00:21:40.978 TEST_HEADER include/spdk/blob_bdev.h 00:21:40.978 TEST_HEADER include/spdk/blobfs_bdev.h 00:21:40.978 TEST_HEADER include/spdk/blobfs.h 00:21:40.978 TEST_HEADER include/spdk/blob.h 00:21:40.978 TEST_HEADER include/spdk/conf.h 00:21:40.978 TEST_HEADER include/spdk/config.h 00:21:41.236 TEST_HEADER include/spdk/cpuset.h 00:21:41.236 TEST_HEADER include/spdk/crc16.h 00:21:41.236 TEST_HEADER include/spdk/crc32.h 00:21:41.236 TEST_HEADER include/spdk/crc64.h 00:21:41.236 TEST_HEADER include/spdk/dif.h 00:21:41.236 
TEST_HEADER include/spdk/dma.h 00:21:41.236 TEST_HEADER include/spdk/endian.h 00:21:41.236 TEST_HEADER include/spdk/env_dpdk.h 00:21:41.236 TEST_HEADER include/spdk/env.h 00:21:41.236 TEST_HEADER include/spdk/event.h 00:21:41.236 TEST_HEADER include/spdk/fd_group.h 00:21:41.236 TEST_HEADER include/spdk/fd.h 00:21:41.236 TEST_HEADER include/spdk/file.h 00:21:41.236 CC test/event/event_perf/event_perf.o 00:21:41.236 TEST_HEADER include/spdk/ftl.h 00:21:41.236 TEST_HEADER include/spdk/gpt_spec.h 00:21:41.236 TEST_HEADER include/spdk/hexlify.h 00:21:41.236 TEST_HEADER include/spdk/histogram_data.h 00:21:41.236 TEST_HEADER include/spdk/idxd.h 00:21:41.236 TEST_HEADER include/spdk/idxd_spec.h 00:21:41.236 TEST_HEADER include/spdk/init.h 00:21:41.236 CC examples/accel/perf/accel_perf.o 00:21:41.236 TEST_HEADER include/spdk/ioat.h 00:21:41.236 CC test/blobfs/mkfs/mkfs.o 00:21:41.236 TEST_HEADER include/spdk/ioat_spec.h 00:21:41.236 TEST_HEADER include/spdk/iscsi_spec.h 00:21:41.236 CC test/accel/dif/dif.o 00:21:41.237 TEST_HEADER include/spdk/json.h 00:21:41.237 TEST_HEADER include/spdk/jsonrpc.h 00:21:41.237 TEST_HEADER include/spdk/likely.h 00:21:41.237 CC test/bdev/bdevio/bdevio.o 00:21:41.237 TEST_HEADER include/spdk/log.h 00:21:41.237 TEST_HEADER include/spdk/lvol.h 00:21:41.237 TEST_HEADER include/spdk/memory.h 00:21:41.237 TEST_HEADER include/spdk/mmio.h 00:21:41.237 CC test/app/bdev_svc/bdev_svc.o 00:21:41.237 TEST_HEADER include/spdk/nbd.h 00:21:41.237 TEST_HEADER include/spdk/notify.h 00:21:41.237 TEST_HEADER include/spdk/nvme.h 00:21:41.237 TEST_HEADER include/spdk/nvme_intel.h 00:21:41.237 TEST_HEADER include/spdk/nvme_ocssd.h 00:21:41.237 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:21:41.237 CC test/dma/test_dma/test_dma.o 00:21:41.237 TEST_HEADER include/spdk/nvme_spec.h 00:21:41.237 TEST_HEADER include/spdk/nvme_zns.h 00:21:41.237 TEST_HEADER include/spdk/nvmf_cmd.h 00:21:41.237 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:21:41.237 TEST_HEADER include/spdk/nvmf.h 00:21:41.237 TEST_HEADER include/spdk/nvmf_spec.h 00:21:41.237 TEST_HEADER include/spdk/nvmf_transport.h 00:21:41.237 TEST_HEADER include/spdk/opal.h 00:21:41.237 TEST_HEADER include/spdk/opal_spec.h 00:21:41.237 TEST_HEADER include/spdk/pci_ids.h 00:21:41.237 TEST_HEADER include/spdk/pipe.h 00:21:41.237 TEST_HEADER include/spdk/queue.h 00:21:41.237 CC test/env/mem_callbacks/mem_callbacks.o 00:21:41.237 TEST_HEADER include/spdk/reduce.h 00:21:41.237 TEST_HEADER include/spdk/rpc.h 00:21:41.237 TEST_HEADER include/spdk/scheduler.h 00:21:41.237 TEST_HEADER include/spdk/scsi.h 00:21:41.237 TEST_HEADER include/spdk/scsi_spec.h 00:21:41.237 TEST_HEADER include/spdk/sock.h 00:21:41.237 TEST_HEADER include/spdk/stdinc.h 00:21:41.237 TEST_HEADER include/spdk/string.h 00:21:41.237 TEST_HEADER include/spdk/thread.h 00:21:41.237 TEST_HEADER include/spdk/trace.h 00:21:41.237 TEST_HEADER include/spdk/trace_parser.h 00:21:41.237 TEST_HEADER include/spdk/tree.h 00:21:41.237 TEST_HEADER include/spdk/ublk.h 00:21:41.237 TEST_HEADER include/spdk/util.h 00:21:41.237 TEST_HEADER include/spdk/uuid.h 00:21:41.237 TEST_HEADER include/spdk/version.h 00:21:41.237 TEST_HEADER include/spdk/vfio_user_pci.h 00:21:41.237 TEST_HEADER include/spdk/vfio_user_spec.h 00:21:41.237 TEST_HEADER include/spdk/vhost.h 00:21:41.237 TEST_HEADER include/spdk/vmd.h 00:21:41.237 TEST_HEADER include/spdk/xor.h 00:21:41.237 TEST_HEADER include/spdk/zipf.h 00:21:41.237 CXX test/cpp_headers/accel.o 00:21:41.237 LINK event_perf 00:21:41.517 LINK mkfs 00:21:41.517 LINK 
bdev_svc 00:21:41.517 CXX test/cpp_headers/accel_module.o 00:21:41.517 LINK dif 00:21:41.517 LINK spdk_trace 00:21:41.517 LINK bdevio 00:21:41.517 CC test/event/reactor/reactor.o 00:21:41.517 LINK accel_perf 00:21:41.517 LINK test_dma 00:21:41.517 CXX test/cpp_headers/assert.o 00:21:41.783 CC app/trace_record/trace_record.o 00:21:41.783 LINK reactor 00:21:41.783 LINK mem_callbacks 00:21:41.783 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:21:41.783 CXX test/cpp_headers/barrier.o 00:21:41.783 CC app/nvmf_tgt/nvmf_main.o 00:21:41.783 CC app/iscsi_tgt/iscsi_tgt.o 00:21:41.783 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:21:41.783 CC examples/bdev/hello_world/hello_bdev.o 00:21:41.783 CC examples/bdev/bdevperf/bdevperf.o 00:21:42.042 CXX test/cpp_headers/base64.o 00:21:42.042 LINK spdk_trace_record 00:21:42.042 CC test/env/vtophys/vtophys.o 00:21:42.042 CC test/event/reactor_perf/reactor_perf.o 00:21:42.042 LINK nvmf_tgt 00:21:42.042 LINK iscsi_tgt 00:21:42.042 CXX test/cpp_headers/bdev.o 00:21:42.042 LINK reactor_perf 00:21:42.042 LINK vtophys 00:21:42.042 LINK hello_bdev 00:21:42.042 LINK nvme_fuzz 00:21:42.345 CC test/event/app_repeat/app_repeat.o 00:21:42.345 CXX test/cpp_headers/bdev_module.o 00:21:42.345 CC test/event/scheduler/scheduler.o 00:21:42.346 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:21:42.346 LINK app_repeat 00:21:42.346 CC examples/blob/hello_world/hello_blob.o 00:21:42.346 CC examples/blob/cli/blobcli.o 00:21:42.346 CC test/env/memory/memory_ut.o 00:21:42.346 CC app/spdk_tgt/spdk_tgt.o 00:21:42.346 CXX test/cpp_headers/bdev_zone.o 00:21:42.605 LINK scheduler 00:21:42.605 LINK env_dpdk_post_init 00:21:42.605 LINK bdevperf 00:21:42.605 CC test/env/pci/pci_ut.o 00:21:42.605 LINK hello_blob 00:21:42.605 CXX test/cpp_headers/bit_array.o 00:21:42.605 LINK spdk_tgt 00:21:42.864 CXX test/cpp_headers/bit_pool.o 00:21:42.864 LINK blobcli 00:21:42.864 CC test/nvme/reset/reset.o 00:21:42.864 CC test/nvme/aer/aer.o 00:21:42.864 CC test/nvme/sgl/sgl.o 00:21:42.864 CC test/lvol/esnap/esnap.o 00:21:42.864 CXX test/cpp_headers/blob_bdev.o 00:21:42.864 CC app/spdk_lspci/spdk_lspci.o 00:21:43.122 LINK pci_ut 00:21:43.122 LINK spdk_lspci 00:21:43.122 CXX test/cpp_headers/blobfs_bdev.o 00:21:43.122 LINK reset 00:21:43.122 LINK sgl 00:21:43.122 LINK aer 00:21:43.122 CC examples/ioat/perf/perf.o 00:21:43.381 CC app/spdk_nvme_perf/perf.o 00:21:43.381 LINK memory_ut 00:21:43.381 CC examples/ioat/verify/verify.o 00:21:43.381 LINK iscsi_fuzz 00:21:43.381 CXX test/cpp_headers/blobfs.o 00:21:43.381 CC test/nvme/e2edp/nvme_dp.o 00:21:43.381 LINK ioat_perf 00:21:43.381 CC test/nvme/overhead/overhead.o 00:21:43.381 CC examples/nvme/hello_world/hello_world.o 00:21:43.640 CXX test/cpp_headers/blob.o 00:21:43.640 LINK verify 00:21:43.640 CXX test/cpp_headers/conf.o 00:21:43.640 CC app/spdk_nvme_identify/identify.o 00:21:43.640 LINK hello_world 00:21:43.640 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:21:43.640 LINK nvme_dp 00:21:43.640 LINK overhead 00:21:43.640 CXX test/cpp_headers/config.o 00:21:43.640 CC app/spdk_nvme_discover/discovery_aer.o 00:21:43.640 CC test/nvme/err_injection/err_injection.o 00:21:43.640 CXX test/cpp_headers/cpuset.o 00:21:43.899 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:21:43.899 CXX test/cpp_headers/crc16.o 00:21:43.899 CC examples/nvme/reconnect/reconnect.o 00:21:43.899 CC examples/nvme/nvme_manage/nvme_manage.o 00:21:43.899 LINK err_injection 00:21:43.899 LINK spdk_nvme_discover 00:21:43.899 CC test/app/histogram_perf/histogram_perf.o 00:21:43.899 CXX 
test/cpp_headers/crc32.o 00:21:44.157 LINK spdk_nvme_perf 00:21:44.158 LINK histogram_perf 00:21:44.158 CXX test/cpp_headers/crc64.o 00:21:44.158 CC test/nvme/startup/startup.o 00:21:44.158 CC app/spdk_top/spdk_top.o 00:21:44.158 LINK vhost_fuzz 00:21:44.158 LINK reconnect 00:21:44.417 CXX test/cpp_headers/dif.o 00:21:44.417 LINK spdk_nvme_identify 00:21:44.417 CC examples/nvme/arbitration/arbitration.o 00:21:44.417 LINK startup 00:21:44.417 CC test/app/jsoncat/jsoncat.o 00:21:44.417 LINK nvme_manage 00:21:44.417 CC test/app/stub/stub.o 00:21:44.417 CXX test/cpp_headers/dma.o 00:21:44.417 LINK jsoncat 00:21:44.417 CC examples/sock/hello_world/hello_sock.o 00:21:44.677 CC test/nvme/reserve/reserve.o 00:21:44.677 LINK stub 00:21:44.677 CC app/vhost/vhost.o 00:21:44.677 CC app/spdk_dd/spdk_dd.o 00:21:44.677 CXX test/cpp_headers/endian.o 00:21:44.677 LINK arbitration 00:21:44.677 CC app/fio/nvme/fio_plugin.o 00:21:44.677 LINK hello_sock 00:21:44.677 CXX test/cpp_headers/env_dpdk.o 00:21:44.677 LINK vhost 00:21:44.677 LINK reserve 00:21:44.936 CC app/fio/bdev/fio_plugin.o 00:21:44.936 CC examples/nvme/hotplug/hotplug.o 00:21:44.936 CXX test/cpp_headers/env.o 00:21:44.936 CC examples/nvme/cmb_copy/cmb_copy.o 00:21:44.936 LINK spdk_dd 00:21:44.936 LINK spdk_top 00:21:44.936 CC test/nvme/simple_copy/simple_copy.o 00:21:44.936 CC test/nvme/connect_stress/connect_stress.o 00:21:44.936 CXX test/cpp_headers/event.o 00:21:45.194 LINK hotplug 00:21:45.194 LINK cmb_copy 00:21:45.194 CXX test/cpp_headers/fd_group.o 00:21:45.194 CXX test/cpp_headers/fd.o 00:21:45.194 LINK spdk_nvme 00:21:45.194 LINK connect_stress 00:21:45.194 LINK simple_copy 00:21:45.194 CC examples/nvme/abort/abort.o 00:21:45.194 CC test/nvme/boot_partition/boot_partition.o 00:21:45.194 LINK spdk_bdev 00:21:45.194 CC test/nvme/compliance/nvme_compliance.o 00:21:45.194 CXX test/cpp_headers/file.o 00:21:45.453 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:21:45.453 CXX test/cpp_headers/ftl.o 00:21:45.453 CXX test/cpp_headers/gpt_spec.o 00:21:45.453 CC examples/vmd/lsvmd/lsvmd.o 00:21:45.453 LINK boot_partition 00:21:45.453 CC test/rpc_client/rpc_client_test.o 00:21:45.453 LINK pmr_persistence 00:21:45.453 CXX test/cpp_headers/hexlify.o 00:21:45.453 LINK lsvmd 00:21:45.453 CC test/thread/poller_perf/poller_perf.o 00:21:45.729 LINK nvme_compliance 00:21:45.729 LINK abort 00:21:45.729 LINK rpc_client_test 00:21:45.729 CXX test/cpp_headers/histogram_data.o 00:21:45.729 CC examples/nvmf/nvmf/nvmf.o 00:21:45.729 LINK poller_perf 00:21:45.729 CC examples/util/zipf/zipf.o 00:21:45.729 CC examples/vmd/led/led.o 00:21:45.729 CXX test/cpp_headers/idxd.o 00:21:45.729 CC test/nvme/fused_ordering/fused_ordering.o 00:21:46.009 CC examples/thread/thread/thread_ex.o 00:21:46.009 CXX test/cpp_headers/idxd_spec.o 00:21:46.009 CC examples/idxd/perf/perf.o 00:21:46.009 CC examples/interrupt_tgt/interrupt_tgt.o 00:21:46.009 LINK zipf 00:21:46.009 LINK led 00:21:46.009 LINK nvmf 00:21:46.009 CXX test/cpp_headers/init.o 00:21:46.009 LINK fused_ordering 00:21:46.009 LINK interrupt_tgt 00:21:46.009 CC test/nvme/doorbell_aers/doorbell_aers.o 00:21:46.009 CXX test/cpp_headers/ioat.o 00:21:46.009 LINK thread 00:21:46.009 CC test/nvme/fdp/fdp.o 00:21:46.269 CXX test/cpp_headers/ioat_spec.o 00:21:46.269 CC test/nvme/cuse/cuse.o 00:21:46.269 CXX test/cpp_headers/iscsi_spec.o 00:21:46.269 LINK idxd_perf 00:21:46.269 CXX test/cpp_headers/json.o 00:21:46.269 CXX test/cpp_headers/jsonrpc.o 00:21:46.269 LINK doorbell_aers 00:21:46.269 CXX test/cpp_headers/likely.o 
00:21:46.269 CXX test/cpp_headers/log.o 00:21:46.269 CXX test/cpp_headers/lvol.o 00:21:46.269 CXX test/cpp_headers/memory.o 00:21:46.269 LINK fdp 00:21:46.269 CXX test/cpp_headers/mmio.o 00:21:46.528 CXX test/cpp_headers/nbd.o 00:21:46.528 CXX test/cpp_headers/notify.o 00:21:46.528 CXX test/cpp_headers/nvme.o 00:21:46.528 CXX test/cpp_headers/nvme_intel.o 00:21:46.528 CXX test/cpp_headers/nvme_ocssd.o 00:21:46.528 CXX test/cpp_headers/nvme_ocssd_spec.o 00:21:46.528 CXX test/cpp_headers/nvme_spec.o 00:21:46.528 CXX test/cpp_headers/nvme_zns.o 00:21:46.528 CXX test/cpp_headers/nvmf_cmd.o 00:21:46.528 CXX test/cpp_headers/nvmf_fc_spec.o 00:21:46.528 CXX test/cpp_headers/nvmf.o 00:21:46.528 CXX test/cpp_headers/nvmf_spec.o 00:21:46.787 CXX test/cpp_headers/nvmf_transport.o 00:21:46.787 CXX test/cpp_headers/opal.o 00:21:46.787 CXX test/cpp_headers/opal_spec.o 00:21:46.787 CXX test/cpp_headers/pci_ids.o 00:21:46.787 CXX test/cpp_headers/pipe.o 00:21:46.787 CXX test/cpp_headers/queue.o 00:21:46.787 CXX test/cpp_headers/reduce.o 00:21:46.787 CXX test/cpp_headers/rpc.o 00:21:46.787 CXX test/cpp_headers/scheduler.o 00:21:46.787 CXX test/cpp_headers/scsi.o 00:21:46.787 CXX test/cpp_headers/scsi_spec.o 00:21:46.787 CXX test/cpp_headers/sock.o 00:21:46.787 CXX test/cpp_headers/stdinc.o 00:21:46.787 CXX test/cpp_headers/string.o 00:21:47.046 CXX test/cpp_headers/thread.o 00:21:47.046 CXX test/cpp_headers/trace.o 00:21:47.046 CXX test/cpp_headers/trace_parser.o 00:21:47.046 CXX test/cpp_headers/tree.o 00:21:47.046 CXX test/cpp_headers/ublk.o 00:21:47.046 CXX test/cpp_headers/util.o 00:21:47.046 CXX test/cpp_headers/uuid.o 00:21:47.046 CXX test/cpp_headers/version.o 00:21:47.046 CXX test/cpp_headers/vfio_user_pci.o 00:21:47.046 CXX test/cpp_headers/vfio_user_spec.o 00:21:47.046 CXX test/cpp_headers/vhost.o 00:21:47.046 CXX test/cpp_headers/vmd.o 00:21:47.046 CXX test/cpp_headers/xor.o 00:21:47.046 LINK esnap 00:21:47.046 CXX test/cpp_headers/zipf.o 00:21:47.046 LINK cuse 00:21:47.615 00:21:47.615 real 0m56.020s 00:21:47.615 user 5m53.250s 00:21:47.615 sys 1m21.794s 00:21:47.615 08:18:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:21:47.615 08:18:20 -- common/autotest_common.sh@10 -- $ set +x 00:21:47.615 ************************************ 00:21:47.615 END TEST make 00:21:47.615 ************************************ 00:21:47.615 08:18:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:47.615 08:18:20 -- nvmf/common.sh@7 -- # uname -s 00:21:47.615 08:18:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.615 08:18:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.615 08:18:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.615 08:18:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.615 08:18:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.615 08:18:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.615 08:18:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.615 08:18:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.615 08:18:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.615 08:18:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.615 08:18:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:21:47.615 08:18:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:21:47.615 08:18:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:21:47.615 08:18:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.615 08:18:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:47.615 08:18:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:47.615 08:18:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.615 08:18:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.615 08:18:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.615 08:18:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.875 08:18:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.875 08:18:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.875 08:18:20 -- paths/export.sh@5 -- # export PATH 00:21:47.875 08:18:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.875 08:18:20 -- nvmf/common.sh@46 -- # : 0 00:21:47.875 08:18:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:47.875 08:18:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:47.875 08:18:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:47.875 08:18:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.875 08:18:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.875 08:18:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:47.875 08:18:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:47.875 08:18:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:47.875 08:18:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:21:47.875 08:18:20 -- spdk/autotest.sh@32 -- # uname -s 00:21:47.875 08:18:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:21:47.875 08:18:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:21:47.875 08:18:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:21:47.875 08:18:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:21:47.875 08:18:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:21:47.875 08:18:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:21:47.875 08:18:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:21:47.875 08:18:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:21:47.875 08:18:21 -- spdk/autotest.sh@48 -- # udevadm_pid=48075 00:21:47.875 08:18:21 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:21:47.875 08:18:21 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:21:47.875 08:18:21 -- spdk/autotest.sh@54 -- # echo 48089 00:21:47.875 08:18:21 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:21:47.875 08:18:21 -- spdk/autotest.sh@56 -- # echo 48093 00:21:47.875 08:18:21 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:21:47.875 08:18:21 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:21:47.875 08:18:21 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:21:47.875 08:18:21 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:21:47.875 08:18:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:47.875 08:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:47.875 08:18:21 -- spdk/autotest.sh@70 -- # create_test_list 00:21:47.875 08:18:21 -- common/autotest_common.sh@736 -- # xtrace_disable 00:21:47.875 08:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:47.875 08:18:21 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:21:47.875 08:18:21 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:21:47.875 08:18:21 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:21:47.875 08:18:21 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:21:47.875 08:18:21 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:21:47.875 08:18:21 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:21:47.875 08:18:21 -- common/autotest_common.sh@1440 -- # uname 00:21:47.875 08:18:21 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:21:47.875 08:18:21 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:21:47.875 08:18:21 -- common/autotest_common.sh@1460 -- # uname 00:21:47.875 08:18:21 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:21:47.875 08:18:21 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:21:47.875 08:18:21 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:21:47.875 08:18:21 -- spdk/autotest.sh@83 -- # hash lcov 00:21:47.875 08:18:21 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:47.875 08:18:21 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:21:47.875 --rc lcov_branch_coverage=1 00:21:47.875 --rc lcov_function_coverage=1 00:21:47.875 --rc genhtml_branch_coverage=1 00:21:47.875 --rc genhtml_function_coverage=1 00:21:47.875 --rc genhtml_legend=1 00:21:47.875 --rc geninfo_all_blocks=1 00:21:47.875 ' 00:21:47.875 08:18:21 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:21:47.875 --rc lcov_branch_coverage=1 00:21:47.875 --rc lcov_function_coverage=1 00:21:47.876 --rc genhtml_branch_coverage=1 00:21:47.876 --rc genhtml_function_coverage=1 00:21:47.876 --rc genhtml_legend=1 00:21:47.876 --rc geninfo_all_blocks=1 00:21:47.876 ' 00:21:47.876 08:18:21 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:21:47.876 --rc lcov_branch_coverage=1 00:21:47.876 --rc lcov_function_coverage=1 00:21:47.876 --rc genhtml_branch_coverage=1 00:21:47.876 --rc genhtml_function_coverage=1 00:21:47.876 --rc genhtml_legend=1 00:21:47.876 --rc geninfo_all_blocks=1 00:21:47.876 --no-external' 00:21:47.876 08:18:21 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:21:47.876 --rc lcov_branch_coverage=1 00:21:47.876 --rc lcov_function_coverage=1 00:21:47.876 --rc genhtml_branch_coverage=1 00:21:47.876 --rc genhtml_function_coverage=1 00:21:47.876 --rc genhtml_legend=1 00:21:47.876 --rc 
geninfo_all_blocks=1 00:21:47.876 --no-external' 00:21:47.876 08:18:21 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:21:47.876 lcov: LCOV version 1.14 00:21:47.876 08:18:21 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:21:56.020 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:21:56.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:21:56.020 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:21:56.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:21:56.020 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:21:56.020 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:22:14.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:22:14.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:22:14.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:22:14.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:22:14.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:22:14.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:22:14.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:22:14.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:22:14.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:22:14.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:22:14.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:22:14.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:22:14.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:22:14.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:22:14.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:22:14.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:22:14.390 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:22:14.390 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:22:14.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions 
found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:22:14.650 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:22:14.650 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:22:14.910 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions 
found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:22:14.911 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:22:14.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:22:15.181 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:22:15.181 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:22:15.181 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:22:15.181 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:22:18.470 08:18:51 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:22:18.470 08:18:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:18.470 08:18:51 -- common/autotest_common.sh@10 -- # set +x 00:22:18.470 08:18:51 -- spdk/autotest.sh@102 -- # rm -f 00:22:18.470 08:18:51 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:19.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:19.297 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:22:19.297 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:22:19.297 08:18:52 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:22:19.297 08:18:52 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:22:19.297 08:18:52 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:22:19.297 08:18:52 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:22:19.297 08:18:52 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:19.297 08:18:52 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:22:19.297 08:18:52 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:22:19.297 08:18:52 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:19.297 08:18:52 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:19.297 08:18:52 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:19.297 08:18:52 -- 
common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:22:19.297 08:18:52 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:22:19.297 08:18:52 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:19.297 08:18:52 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:19.297 08:18:52 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:19.297 08:18:52 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:22:19.297 08:18:52 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:22:19.297 08:18:52 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:22:19.297 08:18:52 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:19.297 08:18:52 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:19.297 08:18:52 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:22:19.297 08:18:52 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:22:19.297 08:18:52 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:22:19.297 08:18:52 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:19.297 08:18:52 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:22:19.297 08:18:52 -- spdk/autotest.sh@121 -- # grep -v p 00:22:19.297 08:18:52 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:22:19.297 08:18:52 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:22:19.297 08:18:52 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:22:19.297 08:18:52 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:22:19.297 08:18:52 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:22:19.297 08:18:52 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:22:19.297 No valid GPT data, bailing 00:22:19.297 08:18:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:19.297 08:18:52 -- scripts/common.sh@393 -- # pt= 00:22:19.297 08:18:52 -- scripts/common.sh@394 -- # return 1 00:22:19.297 08:18:52 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:22:19.297 1+0 records in 00:22:19.297 1+0 records out 00:22:19.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613293 s, 171 MB/s 00:22:19.297 08:18:52 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:22:19.297 08:18:52 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:22:19.297 08:18:52 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:22:19.297 08:18:52 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:22:19.297 08:18:52 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:22:19.297 No valid GPT data, bailing 00:22:19.297 08:18:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:19.297 08:18:52 -- scripts/common.sh@393 -- # pt= 00:22:19.297 08:18:52 -- scripts/common.sh@394 -- # return 1 00:22:19.297 08:18:52 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:22:19.556 1+0 records in 00:22:19.556 1+0 records out 00:22:19.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051683 s, 203 MB/s 00:22:19.556 08:18:52 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:22:19.556 08:18:52 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:22:19.556 08:18:52 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:22:19.556 08:18:52 -- scripts/common.sh@380 -- # local 
block=/dev/nvme1n2 pt 00:22:19.556 08:18:52 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:22:19.556 No valid GPT data, bailing 00:22:19.556 08:18:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:22:19.556 08:18:52 -- scripts/common.sh@393 -- # pt= 00:22:19.556 08:18:52 -- scripts/common.sh@394 -- # return 1 00:22:19.556 08:18:52 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:22:19.556 1+0 records in 00:22:19.556 1+0 records out 00:22:19.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431175 s, 243 MB/s 00:22:19.556 08:18:52 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:22:19.556 08:18:52 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:22:19.556 08:18:52 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:22:19.556 08:18:52 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:22:19.557 08:18:52 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:22:19.557 No valid GPT data, bailing 00:22:19.557 08:18:52 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:22:19.557 08:18:52 -- scripts/common.sh@393 -- # pt= 00:22:19.557 08:18:52 -- scripts/common.sh@394 -- # return 1 00:22:19.557 08:18:52 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:22:19.557 1+0 records in 00:22:19.557 1+0 records out 00:22:19.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404601 s, 259 MB/s 00:22:19.557 08:18:52 -- spdk/autotest.sh@129 -- # sync 00:22:19.816 08:18:52 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:22:19.816 08:18:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:22:19.816 08:18:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:22:22.348 08:18:55 -- spdk/autotest.sh@135 -- # uname -s 00:22:22.348 08:18:55 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:22:22.348 08:18:55 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:22:22.348 08:18:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:22.348 08:18:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:22.348 08:18:55 -- common/autotest_common.sh@10 -- # set +x 00:22:22.348 ************************************ 00:22:22.348 START TEST setup.sh 00:22:22.348 ************************************ 00:22:22.348 08:18:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:22:22.348 * Looking for test storage... 00:22:22.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:22.348 08:18:55 -- setup/test-setup.sh@10 -- # uname -s 00:22:22.348 08:18:55 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:22:22.348 08:18:55 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:22:22.348 08:18:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:22.348 08:18:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:22.348 08:18:55 -- common/autotest_common.sh@10 -- # set +x 00:22:22.348 ************************************ 00:22:22.348 START TEST acl 00:22:22.348 ************************************ 00:22:22.348 08:18:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:22:22.348 * Looking for test storage... 
00:22:22.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:22.348 08:18:55 -- setup/acl.sh@10 -- # get_zoned_devs 00:22:22.348 08:18:55 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:22:22.348 08:18:55 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:22:22.348 08:18:55 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:22:22.348 08:18:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:22.348 08:18:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:22:22.348 08:18:55 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:22:22.348 08:18:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:22.348 08:18:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:22.348 08:18:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:22.348 08:18:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:22:22.348 08:18:55 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:22:22.348 08:18:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:22.348 08:18:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:22.348 08:18:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:22.348 08:18:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:22:22.348 08:18:55 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:22:22.348 08:18:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:22:22.348 08:18:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:22.348 08:18:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:22.348 08:18:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:22:22.348 08:18:55 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:22:22.348 08:18:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:22:22.348 08:18:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:22.348 08:18:55 -- setup/acl.sh@12 -- # devs=() 00:22:22.348 08:18:55 -- setup/acl.sh@12 -- # declare -a devs 00:22:22.348 08:18:55 -- setup/acl.sh@13 -- # drivers=() 00:22:22.348 08:18:55 -- setup/acl.sh@13 -- # declare -A drivers 00:22:22.348 08:18:55 -- setup/acl.sh@51 -- # setup reset 00:22:22.348 08:18:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:22.348 08:18:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:23.285 08:18:56 -- setup/acl.sh@52 -- # collect_setup_devs 00:22:23.285 08:18:56 -- setup/acl.sh@16 -- # local dev driver 00:22:23.285 08:18:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:23.285 08:18:56 -- setup/acl.sh@15 -- # setup output status 00:22:23.285 08:18:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:23.286 08:18:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:22:23.544 Hugepages 00:22:23.544 node hugesize free / total 00:22:23.544 08:18:56 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:22:23.544 08:18:56 -- setup/acl.sh@19 -- # continue 00:22:23.544 08:18:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:23.544 00:22:23.544 Type BDF Vendor Device NUMA Driver Device Block devices 00:22:23.544 08:18:56 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:22:23.544 08:18:56 -- setup/acl.sh@19 -- # continue 00:22:23.544 08:18:56 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:22:23.544 08:18:56 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:22:23.544 08:18:56 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:22:23.544 08:18:56 -- setup/acl.sh@20 -- # continue 00:22:23.544 08:18:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:23.803 08:18:56 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:22:23.803 08:18:56 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:22:23.803 08:18:56 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:22:23.803 08:18:56 -- setup/acl.sh@22 -- # devs+=("$dev") 00:22:23.803 08:18:56 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:22:23.803 08:18:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:23.803 08:18:57 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:22:23.803 08:18:57 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:22:23.803 08:18:57 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:22:23.803 08:18:57 -- setup/acl.sh@22 -- # devs+=("$dev") 00:22:23.803 08:18:57 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:22:23.803 08:18:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:23.803 08:18:57 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:22:23.803 08:18:57 -- setup/acl.sh@54 -- # run_test denied denied 00:22:23.803 08:18:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:23.803 08:18:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:23.803 08:18:57 -- common/autotest_common.sh@10 -- # set +x 00:22:23.803 ************************************ 00:22:23.803 START TEST denied 00:22:23.803 ************************************ 00:22:23.803 08:18:57 -- common/autotest_common.sh@1104 -- # denied 00:22:23.803 08:18:57 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:22:23.803 08:18:57 -- setup/acl.sh@38 -- # setup output config 00:22:23.803 08:18:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:23.803 08:18:57 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:22:23.803 08:18:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:25.181 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:22:25.181 08:18:58 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:22:25.181 08:18:58 -- setup/acl.sh@28 -- # local dev driver 00:22:25.181 08:18:58 -- setup/acl.sh@30 -- # for dev in "$@" 00:22:25.181 08:18:58 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:22:25.181 08:18:58 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:22:25.181 08:18:58 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:22:25.181 08:18:58 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:22:25.181 08:18:58 -- setup/acl.sh@41 -- # setup reset 00:22:25.181 08:18:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:25.181 08:18:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:25.749 00:22:25.749 real 0m1.747s 00:22:25.749 user 0m0.650s 00:22:25.749 sys 0m1.041s 00:22:25.749 08:18:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.749 08:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:25.749 ************************************ 00:22:25.749 END TEST denied 00:22:25.749 ************************************ 00:22:25.749 08:18:58 -- setup/acl.sh@55 -- # run_test allowed allowed 00:22:25.750 08:18:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:25.750 08:18:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:25.750 
08:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:25.750 ************************************ 00:22:25.750 START TEST allowed 00:22:25.750 ************************************ 00:22:25.750 08:18:58 -- common/autotest_common.sh@1104 -- # allowed 00:22:25.750 08:18:58 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:22:25.750 08:18:58 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:22:25.750 08:18:58 -- setup/acl.sh@45 -- # setup output config 00:22:25.750 08:18:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:25.750 08:18:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:26.689 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:22:26.689 08:18:59 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:22:26.689 08:18:59 -- setup/acl.sh@28 -- # local dev driver 00:22:26.689 08:18:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:22:26.689 08:18:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:22:26.689 08:18:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:22:26.689 08:18:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:22:26.689 08:18:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:22:26.689 08:18:59 -- setup/acl.sh@48 -- # setup reset 00:22:26.689 08:18:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:26.689 08:18:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:27.625 00:22:27.625 real 0m1.858s 00:22:27.625 user 0m0.684s 00:22:27.625 sys 0m1.190s 00:22:27.625 08:19:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.625 08:19:00 -- common/autotest_common.sh@10 -- # set +x 00:22:27.625 ************************************ 00:22:27.625 END TEST allowed 00:22:27.625 ************************************ 00:22:27.625 00:22:27.625 real 0m5.314s 00:22:27.625 user 0m2.019s 00:22:27.625 sys 0m3.309s 00:22:27.625 08:19:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.625 08:19:00 -- common/autotest_common.sh@10 -- # set +x 00:22:27.625 ************************************ 00:22:27.625 END TEST acl 00:22:27.625 ************************************ 00:22:27.625 08:19:00 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:22:27.625 08:19:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:27.625 08:19:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:27.625 08:19:00 -- common/autotest_common.sh@10 -- # set +x 00:22:27.625 ************************************ 00:22:27.625 START TEST hugepages 00:22:27.625 ************************************ 00:22:27.625 08:19:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:22:27.625 * Looking for test storage... 
00:22:27.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:27.625 08:19:00 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:22:27.625 08:19:00 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:22:27.625 08:19:00 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:22:27.625 08:19:00 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:22:27.625 08:19:00 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:22:27.885 08:19:00 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:22:27.885 08:19:00 -- setup/common.sh@17 -- # local get=Hugepagesize 00:22:27.885 08:19:00 -- setup/common.sh@18 -- # local node= 00:22:27.885 08:19:00 -- setup/common.sh@19 -- # local var val 00:22:27.885 08:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:22:27.885 08:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:27.885 08:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:27.885 08:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:27.885 08:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:22:27.885 08:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:27.885 08:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5620028 kB' 'MemAvailable: 7412308 kB' 'Buffers: 2436 kB' 'Cached: 2004744 kB' 'SwapCached: 0 kB' 'Active: 834956 kB' 'Inactive: 1278828 kB' 'Active(anon): 117092 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 108524 kB' 'Mapped: 48708 kB' 'Shmem: 10488 kB' 'KReclaimable: 65052 kB' 'Slab: 140408 kB' 'SReclaimable: 65052 kB' 'SUnreclaim: 75356 kB' 'KernelStack: 6376 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 342176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- 
setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.885 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.885 08:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.886 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.886 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.887 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.887 08:19:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.887 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.887 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.887 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.887 08:19:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.887 08:19:00 -- setup/common.sh@32 -- # continue 00:22:27.887 08:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:22:27.887 08:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:22:27.887 08:19:00 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:27.887 08:19:00 -- setup/common.sh@33 -- # echo 2048 00:22:27.887 08:19:00 -- setup/common.sh@33 -- # return 0 00:22:27.887 08:19:00 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:22:27.887 08:19:00 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:22:27.887 08:19:00 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:22:27.887 08:19:00 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:22:27.887 08:19:00 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:22:27.887 08:19:00 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
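(Note on the trace above: the long run of "[[ <field> == Hugepagesize ]] / continue" entries is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time until it reaches Hugepagesize, which it echoes as 2048 kB; setup/hugepages.sh then records that as default_hugepages. A minimal standalone sketch of that parsing pattern, not the exact SPDK helper:)

    # Sketch only: scan /proc/meminfo with IFS=': ' and stop at the requested key,
    # mirroring the read/continue loop visible in the trace above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # e.g. default_hugepages=$(get_meminfo_sketch Hugepagesize)   # "2048" on this VM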
00:22:27.887 08:19:00 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:22:27.887 08:19:00 -- setup/hugepages.sh@207 -- # get_nodes 00:22:27.887 08:19:00 -- setup/hugepages.sh@27 -- # local node 00:22:27.887 08:19:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:27.887 08:19:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:22:27.887 08:19:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:27.887 08:19:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:27.887 08:19:00 -- setup/hugepages.sh@208 -- # clear_hp 00:22:27.887 08:19:00 -- setup/hugepages.sh@37 -- # local node hp 00:22:27.887 08:19:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:22:27.887 08:19:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:22:27.887 08:19:00 -- setup/hugepages.sh@41 -- # echo 0 00:22:27.887 08:19:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:22:27.887 08:19:00 -- setup/hugepages.sh@41 -- # echo 0 00:22:27.887 08:19:01 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:22:27.887 08:19:01 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:22:27.887 08:19:01 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:22:27.887 08:19:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:27.887 08:19:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:27.887 08:19:01 -- common/autotest_common.sh@10 -- # set +x 00:22:27.887 ************************************ 00:22:27.887 START TEST default_setup 00:22:27.887 ************************************ 00:22:27.887 08:19:01 -- common/autotest_common.sh@1104 -- # default_setup 00:22:27.887 08:19:01 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:22:27.887 08:19:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:22:27.887 08:19:01 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:22:27.887 08:19:01 -- setup/hugepages.sh@51 -- # shift 00:22:27.887 08:19:01 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:22:27.887 08:19:01 -- setup/hugepages.sh@52 -- # local node_ids 00:22:27.887 08:19:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:27.887 08:19:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:22:27.887 08:19:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:22:27.887 08:19:01 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:22:27.887 08:19:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:27.887 08:19:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:22:27.887 08:19:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:27.887 08:19:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:27.887 08:19:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:27.887 08:19:01 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:22:27.887 08:19:01 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:22:27.887 08:19:01 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:22:27.887 08:19:01 -- setup/hugepages.sh@73 -- # return 0 00:22:27.887 08:19:01 -- setup/hugepages.sh@137 -- # setup output 00:22:27.887 08:19:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:27.887 08:19:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:28.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:28.829 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:22:28.829 
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:22:28.829 08:19:02 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:22:28.829 08:19:02 -- setup/hugepages.sh@89 -- # local node 00:22:28.829 08:19:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:28.829 08:19:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:28.829 08:19:02 -- setup/hugepages.sh@92 -- # local surp 00:22:28.829 08:19:02 -- setup/hugepages.sh@93 -- # local resv 00:22:28.829 08:19:02 -- setup/hugepages.sh@94 -- # local anon 00:22:28.829 08:19:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:28.829 08:19:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:28.829 08:19:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:28.829 08:19:02 -- setup/common.sh@18 -- # local node= 00:22:28.829 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:28.829 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:28.829 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:28.829 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:28.829 08:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:28.829 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:28.829 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7630844 kB' 'MemAvailable: 9422992 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847620 kB' 'Inactive: 1278832 kB' 'Active(anon): 129756 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120884 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140184 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75404 kB' 'KernelStack: 6368 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.829 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.829 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 
08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:28.830 08:19:02 -- setup/common.sh@32 -- # continue 00:22:28.830 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 
-- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.093 08:19:02 -- setup/common.sh@33 -- # echo 0 00:22:29.093 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.093 08:19:02 -- setup/hugepages.sh@97 -- # anon=0 00:22:29.093 08:19:02 -- 
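(The AnonHugePages scan above ends with anon=0; the same pass is repeated next for HugePages_Surp and HugePages_Rsvd, both also 0 in this run, before the counts are compared against HugePages_Total. A rough sketch of the arithmetic this default_setup test is driving toward, using only values visible in this log; the real logic lives in setup/hugepages.sh:)

    # get_test_nr_hugepages asked for 2097152 kB backed by the default 2048 kB pages:
    size_kb=2097152
    hugepagesize_kb=2048
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024, as echoed later in this log
    # verify_nr_hugepages then checks the kernel's view against that target:
    anon=0; surp=0; resv=0                          # AnonHugePages / Surp / Rsvd from the scans
    hugepages_total=1024                            # HugePages_Total from /proc/meminfo
    (( hugepages_total == nr_hugepages + surp + resv )) && echo "hugepage count verified"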
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:29.093 08:19:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:29.093 08:19:02 -- setup/common.sh@18 -- # local node= 00:22:29.093 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:29.093 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:29.093 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:29.093 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:29.093 08:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:29.093 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:29.093 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7630844 kB' 'MemAvailable: 9422992 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847340 kB' 'Inactive: 1278832 kB' 'Active(anon): 129476 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120600 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140176 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75396 kB' 'KernelStack: 6400 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.093 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.093 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 
00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- 
setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 
00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.094 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.094 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.095 08:19:02 -- setup/common.sh@33 -- # echo 0 00:22:29.095 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.095 08:19:02 -- setup/hugepages.sh@99 -- # surp=0 00:22:29.095 08:19:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:29.095 08:19:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:29.095 08:19:02 -- setup/common.sh@18 -- # local node= 00:22:29.095 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:29.095 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:29.095 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:29.095 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:29.095 08:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:29.095 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:29.095 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7631104 kB' 'MemAvailable: 9423252 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 
'SwapCached: 0 kB' 'Active: 847340 kB' 'Inactive: 1278832 kB' 'Active(anon): 129476 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120636 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140176 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75396 kB' 'KernelStack: 6416 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.095 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.095 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val 
_ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 
00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.096 08:19:02 -- setup/common.sh@33 -- # echo 0 00:22:29.096 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.096 08:19:02 -- setup/hugepages.sh@100 -- # resv=0 00:22:29.096 08:19:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:22:29.096 nr_hugepages=1024 00:22:29.096 resv_hugepages=0 00:22:29.096 08:19:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:29.096 surplus_hugepages=0 00:22:29.096 08:19:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:29.096 anon_hugepages=0 00:22:29.096 08:19:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:29.096 08:19:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:29.096 08:19:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:22:29.096 08:19:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:29.096 08:19:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:29.096 08:19:02 -- setup/common.sh@18 -- # local node= 00:22:29.096 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:29.096 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:29.096 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:29.096 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:29.096 08:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:29.096 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:29.096 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7631144 kB' 'MemAvailable: 9423292 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847552 kB' 'Inactive: 1278832 kB' 'Active(anon): 129688 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120860 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140180 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75400 kB' 'KernelStack: 6400 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.096 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.096 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read 
-r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 
00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.097 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.097 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.098 08:19:02 -- setup/common.sh@33 -- # echo 1024 
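[editor's note] The scan traced above is a meminfo field lookup: the script splits each "Field: value kB" line with IFS=': ', compares the field name against the requested key (xtrace prints the literal pattern with every character backslash-escaped, hence \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l), and keeps issuing "continue" until it reaches HugePages_Total, whose value it then echoes. A minimal stand-alone sketch of that logic, with a hypothetical helper name rather than the actual setup/common.sh code:

    get_meminfo_field() {
        # Print the value of one field from a meminfo-style file.
        local want=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            # Mirrors the per-field [[ $var == FIELD ]] / continue loop in the trace.
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < "$file"
        return 1
    }

    # Example against the run above: get_meminfo_field HugePages_Total  ->  1024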
00:22:29.098 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.098 08:19:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:29.098 08:19:02 -- setup/hugepages.sh@112 -- # get_nodes 00:22:29.098 08:19:02 -- setup/hugepages.sh@27 -- # local node 00:22:29.098 08:19:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:29.098 08:19:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:22:29.098 08:19:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:29.098 08:19:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:29.098 08:19:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:29.098 08:19:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:29.098 08:19:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:29.098 08:19:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:29.098 08:19:02 -- setup/common.sh@18 -- # local node=0 00:22:29.098 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:29.098 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:29.098 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:29.098 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:29.098 08:19:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:29.098 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:29.098 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7631144 kB' 'MemUsed: 4610832 kB' 'SwapCached: 0 kB' 'Active: 847552 kB' 'Inactive: 1278832 kB' 'Active(anon): 129688 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 2007172 kB' 'Mapped: 48712 kB' 'AnonPages: 120832 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64780 kB' 'Slab: 140180 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 
08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.098 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.098 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 
08:19:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.099 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.099 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.099 08:19:02 -- setup/common.sh@33 -- # echo 0 00:22:29.099 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.099 08:19:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:29.099 08:19:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:29.099 08:19:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:29.099 08:19:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:29.099 08:19:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:22:29.099 node0=1024 expecting 1024 00:22:29.099 08:19:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:22:29.099 00:22:29.099 real 0m1.253s 00:22:29.099 user 0m0.530s 00:22:29.099 sys 0m0.677s 00:22:29.099 08:19:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.099 08:19:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.099 ************************************ 00:22:29.099 END TEST default_setup 00:22:29.099 ************************************ 00:22:29.099 08:19:02 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:22:29.099 08:19:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:29.099 08:19:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:29.099 08:19:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.099 ************************************ 00:22:29.099 START TEST per_node_1G_alloc 00:22:29.099 ************************************ 
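[editor's note] per_node_1G_alloc, which starts here, requests 1048576 kB (1 GiB) of hugepages restricted to NUMA node 0; with the 2048 kB page size reported above that works out to 512 pages, hence nr_hugepages=512, exported to scripts/setup.sh as NRHUGE=512 and HUGENODE=0. Roughly, a per-node reservation of this kind comes down to the standard sysfs knob; this is an illustrative sketch only, the real work is done by the SPDK setup.sh invoked below:

    NRHUGE=512     # 512 pages x 2048 kB = 1048576 kB = 1 GiB
    HUGENODE=0
    echo "$NRHUGE" > /sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages

The per-node result is then read back from /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the trace strips with mem=("${mem[@]#Node +([0-9]) }") before the same field scan runs again.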
00:22:29.099 08:19:02 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:22:29.099 08:19:02 -- setup/hugepages.sh@143 -- # local IFS=, 00:22:29.099 08:19:02 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:22:29.099 08:19:02 -- setup/hugepages.sh@49 -- # local size=1048576 00:22:29.099 08:19:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:22:29.099 08:19:02 -- setup/hugepages.sh@51 -- # shift 00:22:29.099 08:19:02 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:22:29.099 08:19:02 -- setup/hugepages.sh@52 -- # local node_ids 00:22:29.099 08:19:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:29.099 08:19:02 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:22:29.099 08:19:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:22:29.099 08:19:02 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:22:29.099 08:19:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:29.099 08:19:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:22:29.099 08:19:02 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:29.099 08:19:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:29.099 08:19:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:29.099 08:19:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:22:29.099 08:19:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:22:29.099 08:19:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:22:29.099 08:19:02 -- setup/hugepages.sh@73 -- # return 0 00:22:29.099 08:19:02 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:22:29.099 08:19:02 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:22:29.099 08:19:02 -- setup/hugepages.sh@146 -- # setup output 00:22:29.099 08:19:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:29.099 08:19:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:29.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:29.676 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:29.676 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:29.676 08:19:02 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:22:29.676 08:19:02 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:22:29.676 08:19:02 -- setup/hugepages.sh@89 -- # local node 00:22:29.677 08:19:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:29.677 08:19:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:29.677 08:19:02 -- setup/hugepages.sh@92 -- # local surp 00:22:29.677 08:19:02 -- setup/hugepages.sh@93 -- # local resv 00:22:29.677 08:19:02 -- setup/hugepages.sh@94 -- # local anon 00:22:29.677 08:19:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:29.677 08:19:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:29.677 08:19:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:29.677 08:19:02 -- setup/common.sh@18 -- # local node= 00:22:29.677 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:29.677 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:29.677 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:29.677 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:29.677 08:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:29.677 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:29.677 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 
00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8689456 kB' 'MemAvailable: 10481616 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847596 kB' 'Inactive: 1278844 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120868 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140188 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75408 kB' 'KernelStack: 6376 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 
08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.677 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.677 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:29.678 08:19:02 -- setup/common.sh@33 -- # echo 0 00:22:29.678 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.678 08:19:02 -- setup/hugepages.sh@97 -- # anon=0 00:22:29.678 08:19:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:29.678 08:19:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:29.678 08:19:02 -- setup/common.sh@18 -- # local node= 00:22:29.678 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:29.678 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:29.678 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:29.678 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:29.678 08:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:29.678 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:29.678 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8689456 kB' 'MemAvailable: 10481616 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847368 kB' 'Inactive: 1278844 kB' 'Active(anon): 129504 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120644 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 
kB' 'Slab: 140200 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75420 kB' 'KernelStack: 6400 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 
08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 
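[editor's note] The surrounding scans collect HugePages_Surp and, next, HugePages_Rsvd so the test can repeat the accounting check already seen at hugepages.sh@110 for default_setup: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages. As a worked example for this run, illustrative only and again using the hypothetical helper from above:

    total=$(get_meminfo_field HugePages_Total)   # 512 after the per-node allocation
    surp=$(get_meminfo_field HugePages_Surp)     # 0
    resv=$(get_meminfo_field HugePages_Rsvd)     # 0
    (( total == 512 + surp + resv )) && echo "hugepage accounting matches"   # 512 == 512 + 0 + 0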
00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.678 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.678 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # 
read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.679 08:19:02 -- setup/common.sh@33 -- # echo 0 00:22:29.679 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.679 08:19:02 -- setup/hugepages.sh@99 -- # surp=0 00:22:29.679 08:19:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:29.679 08:19:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:29.679 08:19:02 -- setup/common.sh@18 -- # local node= 00:22:29.679 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:29.679 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:29.679 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:29.679 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:29.679 08:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:29.679 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:29.679 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8689456 kB' 'MemAvailable: 10481616 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847368 kB' 'Inactive: 1278844 kB' 'Active(anon): 129504 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120904 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140200 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75420 kB' 'KernelStack: 6400 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.679 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.679 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- 
# continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.680 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.680 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:29.681 08:19:02 -- setup/common.sh@33 -- # echo 0 00:22:29.681 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.681 08:19:02 -- setup/hugepages.sh@100 -- # resv=0 00:22:29.681 08:19:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:22:29.681 nr_hugepages=512 00:22:29.681 
resv_hugepages=0 00:22:29.681 08:19:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:29.681 surplus_hugepages=0 00:22:29.681 08:19:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:29.681 anon_hugepages=0 00:22:29.681 08:19:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:29.681 08:19:02 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:22:29.681 08:19:02 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:22:29.681 08:19:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:29.681 08:19:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:29.681 08:19:02 -- setup/common.sh@18 -- # local node= 00:22:29.681 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:29.681 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:29.681 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:29.681 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:29.681 08:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:29.681 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:29.681 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8689456 kB' 'MemAvailable: 10481616 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847596 kB' 'Inactive: 1278844 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120896 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140192 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75412 kB' 'KernelStack: 6400 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Buffers == 
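At this point both lookups have come back zero (surp=0, resv=0), the script echoes nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and hugepages.sh then asserts that the HugePages_Total it reads next equals the requested page count plus surplus plus reserved pages. The same consistency check written out on its own looks like the sketch below; the name "expected" stands in for the test's 512-page target and is not taken from the script.

    expected=512   # pages the test configured via nr_hugepages
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    # Mirrors the relation traced at hugepages.sh@110: the pool size reported by
    # the kernel must match the requested count once surplus and reserved pages
    # are added in (both are zero in the passing run above).
    if (( total == expected + surp + resv )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "hugepage accounting mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
        exit 1
    fi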
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 
00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.681 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.681 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 
08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:29.682 08:19:02 -- setup/common.sh@33 -- # echo 512 00:22:29.682 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.682 08:19:02 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:22:29.682 08:19:02 -- setup/hugepages.sh@112 -- # get_nodes 00:22:29.682 08:19:02 -- setup/hugepages.sh@27 -- # local node 00:22:29.682 08:19:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:29.682 08:19:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:22:29.682 08:19:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:29.682 08:19:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:29.682 08:19:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:29.682 08:19:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:29.682 08:19:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:29.682 08:19:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:29.682 08:19:02 -- setup/common.sh@18 -- # local node=0 00:22:29.682 08:19:02 -- setup/common.sh@19 -- # local var val 00:22:29.682 08:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:22:29.682 08:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:29.682 08:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:29.682 08:19:02 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:22:29.682 08:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:22:29.682 08:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8689456 kB' 'MemUsed: 3552520 kB' 'SwapCached: 0 kB' 'Active: 847672 kB' 'Inactive: 1278844 kB' 'Active(anon): 129808 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 2007172 kB' 'Mapped: 49492 kB' 'AnonPages: 121008 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64780 kB' 'Slab: 140196 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.682 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.682 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- 
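The lookup running here is the per-node variant: get_meminfo HugePages_Surp 0 finds /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the helper strips before matching, and whose snapshot (just printed above) has the node file's own field set (MemUsed, FilePages) alongside the hugepage counters. A one-liner doing the same node-scoped read, sketched with sed instead of the helper's read loop, would be:

    node=0
    node_meminfo=/sys/devices/system/node/node$node/meminfo
    if [[ -e $node_meminfo ]]; then
        # Per-node meminfo lines look like "Node 0 HugePages_Surp:   0";
        # strip the prefix and field name, leaving just the value.
        sed -n "s/^Node $node HugePages_Surp:[[:space:]]*//p" "$node_meminfo"
    else
        echo "no meminfo for node $node (non-NUMA kernel?)" >&2
    fi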
setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # continue 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:22:29.683 08:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:22:29.683 08:19:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:29.683 08:19:02 -- setup/common.sh@33 -- # echo 0 00:22:29.683 08:19:02 -- setup/common.sh@33 -- # return 0 00:22:29.683 08:19:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:29.683 08:19:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:29.683 08:19:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:29.683 08:19:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:29.683 08:19:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:22:29.683 node0=512 expecting 512 00:22:29.683 08:19:02 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:22:29.683 00:22:29.683 real 0m0.640s 00:22:29.683 user 0m0.264s 00:22:29.683 sys 0m0.412s 00:22:29.683 08:19:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.683 08:19:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.683 ************************************ 00:22:29.683 END TEST per_node_1G_alloc 00:22:29.683 ************************************ 00:22:29.943 08:19:03 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:22:29.943 08:19:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:29.943 08:19:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:29.943 08:19:03 -- common/autotest_common.sh@10 -- # set +x 00:22:29.943 ************************************ 00:22:29.943 START TEST even_2G_alloc 00:22:29.943 ************************************ 00:22:29.943 08:19:03 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:22:29.943 08:19:03 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:22:29.943 08:19:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:22:29.943 08:19:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:22:29.943 08:19:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:29.943 08:19:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:22:29.943 08:19:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:22:29.943 08:19:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:22:29.943 08:19:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:29.943 08:19:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:22:29.943 08:19:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:29.943 08:19:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:29.943 08:19:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:29.943 08:19:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:22:29.943 08:19:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:22:29.943 08:19:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:29.943 08:19:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:22:29.943 08:19:03 -- setup/hugepages.sh@83 -- # : 0 00:22:29.943 08:19:03 -- 
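per_node_1G_alloc finishes here with node0=512 expecting 512, and even_2G_alloc starts by converting its 2097152 kB request into a page count: get_test_nr_hugepages divides the size by the default hugepage size (2048 kB on this host), giving the NRHUGE=1024 seen a few entries below, which HUGE_EVEN_ALLOC=yes then spreads over the available NUMA nodes (only node0 on this single-node VM). The conversion on its own is just:

    size_kb=2097152                                                 # even_2G_alloc's 2 GiB request
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 kB on this host
    echo "NRHUGE=$(( size_kb / hugepage_kb ))"                      # prints NRHUGE=1024 here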
setup/hugepages.sh@84 -- # : 0 00:22:29.943 08:19:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:29.943 08:19:03 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:22:29.943 08:19:03 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:22:29.943 08:19:03 -- setup/hugepages.sh@153 -- # setup output 00:22:29.943 08:19:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:29.943 08:19:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:30.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:30.203 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:30.203 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:30.466 08:19:03 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:22:30.466 08:19:03 -- setup/hugepages.sh@89 -- # local node 00:22:30.466 08:19:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:30.466 08:19:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:30.466 08:19:03 -- setup/hugepages.sh@92 -- # local surp 00:22:30.466 08:19:03 -- setup/hugepages.sh@93 -- # local resv 00:22:30.466 08:19:03 -- setup/hugepages.sh@94 -- # local anon 00:22:30.466 08:19:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:30.466 08:19:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:30.466 08:19:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:30.466 08:19:03 -- setup/common.sh@18 -- # local node= 00:22:30.466 08:19:03 -- setup/common.sh@19 -- # local var val 00:22:30.466 08:19:03 -- setup/common.sh@20 -- # local mem_f mem 00:22:30.466 08:19:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:30.466 08:19:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:30.466 08:19:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:30.466 08:19:03 -- setup/common.sh@28 -- # mapfile -t mem 00:22:30.466 08:19:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:30.466 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.466 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7650112 kB' 'MemAvailable: 9442272 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847640 kB' 'Inactive: 1278844 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120884 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140244 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75464 kB' 'KernelStack: 6360 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var 
val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 
08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # 
continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.467 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.467 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:30.468 08:19:03 -- setup/common.sh@33 -- # echo 0 00:22:30.468 08:19:03 -- setup/common.sh@33 -- # return 0 00:22:30.468 08:19:03 -- setup/hugepages.sh@97 -- # anon=0 00:22:30.468 08:19:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:30.468 08:19:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:30.468 08:19:03 -- setup/common.sh@18 -- # local node= 00:22:30.468 08:19:03 -- setup/common.sh@19 -- # local var val 00:22:30.468 08:19:03 -- setup/common.sh@20 -- # local mem_f mem 00:22:30.468 08:19:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:30.468 08:19:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:30.468 08:19:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:30.468 08:19:03 -- setup/common.sh@28 -- # mapfile -t mem 00:22:30.468 08:19:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7650112 kB' 'MemAvailable: 9442272 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847556 kB' 'Inactive: 1278844 kB' 'Active(anon): 129692 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120800 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140248 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75468 kB' 'KernelStack: 6384 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # 
continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- 
# read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.468 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.468 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- 
# continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.469 08:19:03 -- setup/common.sh@33 -- # echo 0 00:22:30.469 08:19:03 -- setup/common.sh@33 -- # return 0 00:22:30.469 08:19:03 -- setup/hugepages.sh@99 -- # surp=0 00:22:30.469 08:19:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:30.469 08:19:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:30.469 08:19:03 -- setup/common.sh@18 -- # local node= 00:22:30.469 08:19:03 -- setup/common.sh@19 -- # local var val 00:22:30.469 08:19:03 -- 
setup/common.sh@20 -- # local mem_f mem 00:22:30.469 08:19:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:30.469 08:19:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:30.469 08:19:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:30.469 08:19:03 -- setup/common.sh@28 -- # mapfile -t mem 00:22:30.469 08:19:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7650620 kB' 'MemAvailable: 9442780 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847328 kB' 'Inactive: 1278844 kB' 'Active(anon): 129464 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120652 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140228 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75448 kB' 'KernelStack: 6368 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.469 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.469 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 
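
The trace above repeats a single pattern for every field in the snapshot: split the line on ': ', skip it unless the key matches the statistic being asked for (hence the escaped \H\u\g\e\P\a\g\e\s\_... patterns), then echo the matching value and return. A minimal sketch of that loop, reconstructed from the trace rather than copied from setup/common.sh, assuming a plain read over /proc/meminfo:

    # Approximation of the get_meminfo matching loop seen above (not the verbatim helper).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do      # e.g. var=HugePages_Surp, val=0
            [[ $var == "$get" ]] || continue      # the escaped pattern tests traced above
            echo "$val"                           # common.sh@33 echoes the value, then "return 0"
            return 0
        done < /proc/meminfo
        echo 0                                    # fallback if the field is absent (assumption)
    }

    surp=$(get_meminfo_sketch HugePages_Surp)     # -> 0 in this run
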
00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- 
setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.470 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.470 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val 
_ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:30.471 08:19:03 -- setup/common.sh@33 -- # echo 0 00:22:30.471 08:19:03 -- setup/common.sh@33 -- # return 0 00:22:30.471 08:19:03 -- setup/hugepages.sh@100 -- # resv=0 00:22:30.471 08:19:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:22:30.471 nr_hugepages=1024 00:22:30.471 resv_hugepages=0 00:22:30.471 08:19:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:30.471 surplus_hugepages=0 00:22:30.471 08:19:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:30.471 anon_hugepages=0 00:22:30.471 08:19:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:30.471 08:19:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:30.471 08:19:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:22:30.471 08:19:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:30.471 08:19:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:30.471 08:19:03 -- setup/common.sh@18 -- # local node= 00:22:30.471 08:19:03 -- setup/common.sh@19 -- # local var val 00:22:30.471 08:19:03 -- setup/common.sh@20 -- # local mem_f mem 00:22:30.471 08:19:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:30.471 08:19:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:30.471 08:19:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:30.471 08:19:03 -- setup/common.sh@28 -- # mapfile -t mem 00:22:30.471 08:19:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:30.471 08:19:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7650872 kB' 'MemAvailable: 9443032 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847528 kB' 'Inactive: 1278844 kB' 'Active(anon): 129664 kB' 
'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 120884 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140224 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75444 kB' 'KernelStack: 6384 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.471 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.471 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- 
setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 
00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.472 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.472 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 
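
Before any of these comparisons run, each pass snapshots the whole meminfo source into an array and strips the "Node <n> " prefix that per-node sysfs files carry, so /proc/meminfo and the node files parse identically (common.sh@28-29 in the trace). Roughly:

    shopt -s extglob                      # the +([0-9]) pattern below needs extglob
    mapfile -t mem < "$mem_f"             # one consistent snapshot of the chosen meminfo file
    mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 HugePages_Total: 1024" -> "HugePages_Total: 1024"
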
00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 
00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.473 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.473 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:30.473 08:19:03 -- setup/common.sh@33 -- # echo 1024 00:22:30.474 08:19:03 -- setup/common.sh@33 -- # return 0 00:22:30.474 08:19:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:30.474 08:19:03 -- setup/hugepages.sh@112 -- # get_nodes 00:22:30.474 08:19:03 -- setup/hugepages.sh@27 -- # local node 00:22:30.474 08:19:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:30.474 08:19:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:22:30.474 08:19:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:30.474 08:19:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:30.474 08:19:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:30.474 08:19:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:30.474 08:19:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:30.474 08:19:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:30.474 08:19:03 -- setup/common.sh@18 -- # local node=0 00:22:30.474 08:19:03 -- setup/common.sh@19 -- # local var val 00:22:30.474 08:19:03 -- setup/common.sh@20 -- # local mem_f mem 00:22:30.474 08:19:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:30.474 08:19:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:30.474 08:19:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:30.474 08:19:03 -- setup/common.sh@28 -- # mapfile -t mem 00:22:30.474 08:19:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7650624 kB' 'MemUsed: 4591352 kB' 'SwapCached: 0 kB' 'Active: 847400 kB' 'Inactive: 1278844 kB' 'Active(anon): 129536 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 2007172 kB' 'Mapped: 48712 kB' 'AnonPages: 120720 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64780 kB' 'Slab: 140224 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 
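
This last pass repeats the scan per NUMA node: with node=0 the helper swaps /proc/meminfo for the node's own sysfs file, as common.sh@22-24 shows above; in the earlier node-less passes the "/sys/devices/system/node/node/meminfo" existence test simply fails and /proc/meminfo is kept. Roughly, using the names visible in the trace:

    node=0
    mem_f=/proc/meminfo
    # Prefer the per-node counters when a node was requested and its meminfo exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
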
00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.474 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.474 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- 
setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # continue 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:22:30.475 08:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:22:30.475 08:19:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:30.475 08:19:03 -- setup/common.sh@33 -- # echo 0 00:22:30.475 08:19:03 -- setup/common.sh@33 -- # return 0 00:22:30.475 08:19:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:30.475 08:19:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:30.475 08:19:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:30.475 08:19:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:30.475 08:19:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:22:30.475 node0=1024 expecting 1024 00:22:30.475 08:19:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:22:30.475 00:22:30.475 real 0m0.682s 00:22:30.475 user 0m0.313s 00:22:30.475 sys 0m0.409s 
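The loop traced above is setup/common.sh's get_meminfo helper: it walks /proc/meminfo (or a per-node meminfo file when a node is passed), splits each line on ': ', and echoes the value of the requested key. Here HugePages_Surp comes back 0, so even_2G_alloc finishes with node0 holding exactly the 1024 pages it expected. A minimal sketch of that parsing pattern, reconstructed from the trace rather than taken from the real setup/common.sh (the function name, the simplified "Node N " prefix stripping and the single-digit node handling are assumptions):

  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      # with a node argument, read that node's meminfo instead (assumption)
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node [0-9] }               # per-node files prefix each line with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$key" ]]; then
              echo "$val"                        # numeric value only; a trailing "kB" lands in $_
              return 0
          fi
      done < "$mem_f"
      return 1
  }

Against the meminfo snapshot printed later in this log, get_meminfo_sketch HugePages_Surp prints 0 and get_meminfo_sketch HugePages_Total prints 1025, matching the values the tests read.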
00:22:30.475 08:19:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.475 08:19:03 -- common/autotest_common.sh@10 -- # set +x 00:22:30.475 ************************************ 00:22:30.475 END TEST even_2G_alloc 00:22:30.475 ************************************ 00:22:30.475 08:19:03 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:22:30.475 08:19:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:30.475 08:19:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:30.475 08:19:03 -- common/autotest_common.sh@10 -- # set +x 00:22:30.475 ************************************ 00:22:30.475 START TEST odd_alloc 00:22:30.475 ************************************ 00:22:30.475 08:19:03 -- common/autotest_common.sh@1104 -- # odd_alloc 00:22:30.475 08:19:03 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:22:30.475 08:19:03 -- setup/hugepages.sh@49 -- # local size=2098176 00:22:30.475 08:19:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:22:30.475 08:19:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:30.475 08:19:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:22:30.475 08:19:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:22:30.476 08:19:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:22:30.476 08:19:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:30.476 08:19:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:22:30.476 08:19:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:30.476 08:19:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:30.476 08:19:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:30.476 08:19:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:22:30.476 08:19:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:22:30.476 08:19:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:30.476 08:19:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:22:30.476 08:19:03 -- setup/hugepages.sh@83 -- # : 0 00:22:30.476 08:19:03 -- setup/hugepages.sh@84 -- # : 0 00:22:30.476 08:19:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:30.476 08:19:03 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:22:30.476 08:19:03 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:22:30.476 08:19:03 -- setup/hugepages.sh@160 -- # setup output 00:22:30.476 08:19:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:30.476 08:19:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:31.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:31.064 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:31.064 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:31.064 08:19:04 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:22:31.064 08:19:04 -- setup/hugepages.sh@89 -- # local node 00:22:31.064 08:19:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:31.064 08:19:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:31.064 08:19:04 -- setup/hugepages.sh@92 -- # local surp 00:22:31.064 08:19:04 -- setup/hugepages.sh@93 -- # local resv 00:22:31.064 08:19:04 -- setup/hugepages.sh@94 -- # local anon 00:22:31.064 08:19:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:31.064 08:19:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:31.064 08:19:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:31.064 08:19:04 -- setup/common.sh@18 -- # local node= 
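odd_alloc, which starts above, requests 2098176 kB of hugepage memory (HUGEMEM=2049), ends up with the deliberately odd count of 1025 pages of 2048 kB on the single node, and re-runs scripts/setup.sh before verifying the result. A small sketch of that sizing arithmetic, under the assumption that the kB size is ceiling-divided by the default 2048 kB hugepage; the rounding rule is a guess that happens to reproduce the 1025 logged above, and the variable names are likewise illustrative rather than the hugepages.sh source:

  default_hugepages=2048                                    # kB per 2 MiB hugepage
  size_kb=2098176                                           # 2049 MiB requested by odd_alloc
  nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))
  echo "nr_hugepages=$nr_hugepages"                         # 1025
  nodes_test=()                                             # single-node VM: everything lands on node 0
  nodes_test[0]=$nr_hugepages
  echo "node0=${nodes_test[0]}"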
00:22:31.064 08:19:04 -- setup/common.sh@19 -- # local var val 00:22:31.064 08:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.064 08:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.064 08:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:31.064 08:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:31.064 08:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.064 08:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7647128 kB' 'MemAvailable: 9439288 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847648 kB' 'Inactive: 1278844 kB' 'Active(anon): 129784 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120892 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140264 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75484 kB' 'KernelStack: 6376 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 
08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.064 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.064 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 
00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.065 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.065 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.066 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.066 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.067 08:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.067 08:19:04 -- setup/common.sh@33 -- # echo 0 00:22:31.067 08:19:04 -- setup/common.sh@33 -- # return 0 00:22:31.067 08:19:04 -- setup/hugepages.sh@97 -- # anon=0 00:22:31.067 08:19:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:31.067 08:19:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:31.067 08:19:04 -- setup/common.sh@18 -- # local node= 00:22:31.067 08:19:04 -- setup/common.sh@19 -- # local var val 00:22:31.067 08:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.067 08:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.067 08:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:31.067 08:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:31.067 08:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.067 08:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.067 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 
08:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7646876 kB' 'MemAvailable: 9439036 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847404 kB' 'Inactive: 1278844 kB' 'Active(anon): 129540 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120672 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140264 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75484 kB' 'KernelStack: 6400 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 
08:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.068 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.068 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.069 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.069 08:19:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- 
# IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.070 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.070 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.071 
08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.071 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.071 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read 
-r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.072 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.072 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.073 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.073 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.073 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.073 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.073 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.073 08:19:04 -- setup/common.sh@33 -- # echo 0 00:22:31.073 08:19:04 -- setup/common.sh@33 -- # return 0 00:22:31.073 08:19:04 -- setup/hugepages.sh@99 -- # surp=0 00:22:31.073 08:19:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:31.073 08:19:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:31.073 08:19:04 -- setup/common.sh@18 -- # local node= 00:22:31.073 08:19:04 -- setup/common.sh@19 -- # local var val 00:22:31.073 08:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.073 08:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.073 08:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:31.073 08:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:31.073 08:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.073 08:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:31.073 08:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7646876 kB' 'MemAvailable: 9439036 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847440 kB' 'Inactive: 1278844 kB' 'Active(anon): 129576 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120704 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140264 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75484 kB' 'KernelStack: 6416 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 
'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:31.073 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.073 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.073 08:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.073 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.073 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.073 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.073 08:19:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.073 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.073 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.073 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.073 08:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.073 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.073 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 
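The get_meminfo calls in this stretch are verify_nr_hugepages collecting the hugepage accounting fields one by one: AnonHugePages gave anon=0 earlier, HugePages_Surp gave surp=0 just above, and HugePages_Rsvd is being read now; once resv is in hand, the check at hugepages.sh@107 below confirms that 1025 equals nr_hugepages + surp + resv before HugePages_Total and the per-node split are compared. A hedged sketch of that bookkeeping, reusing the get_meminfo_sketch helper from the earlier sketch, so every name here is an assumption rather than the hugepages.sh source:

  nr_hugepages=1025
  anon=$(get_meminfo_sketch AnonHugePages)      # 0 kB of anonymous THP in this run
  surp=$(get_meminfo_sketch HugePages_Surp)     # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
  total=$(get_meminfo_sketch HugePages_Total)   # 1025
  (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting"
  (( total == nr_hugepages ))                || echo "expected $nr_hugepages pages, found $total"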
00:22:31.074 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.074 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.074 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.075 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.075 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.076 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.076 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 
-- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.077 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.077 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.078 
08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.078 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.078 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.078 08:19:04 -- setup/common.sh@33 -- # echo 0 00:22:31.078 08:19:04 -- setup/common.sh@33 -- # return 0 00:22:31.078 08:19:04 -- setup/hugepages.sh@100 -- # resv=0 00:22:31.078 08:19:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:22:31.078 nr_hugepages=1025 00:22:31.078 resv_hugepages=0 00:22:31.078 08:19:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:31.078 surplus_hugepages=0 00:22:31.078 08:19:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:31.078 08:19:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:31.078 anon_hugepages=0 00:22:31.078 08:19:04 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:22:31.079 08:19:04 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:22:31.079 08:19:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:31.079 08:19:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:31.079 08:19:04 -- setup/common.sh@18 -- # local node= 00:22:31.079 08:19:04 -- setup/common.sh@19 -- # local var val 00:22:31.079 08:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.079 08:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.079 08:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:31.079 08:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:31.079 08:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.079 08:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:31.079 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.079 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.079 08:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7653088 kB' 'MemAvailable: 9445248 kB' 'Buffers: 2436 kB' 'Cached: 2004736 kB' 'SwapCached: 0 kB' 'Active: 847640 kB' 'Inactive: 1278844 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120940 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140260 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75480 kB' 'KernelStack: 6400 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:31.079 08:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.079 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.079 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.079 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.079 08:19:04 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.079 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.079 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.079 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.079 08:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.079 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.079 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.079 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.079 08:19:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.079 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.080 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.080 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 
00:22:31.081 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.081 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.081 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 
08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.082 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.082 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.083 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.083 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- 
# read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.084 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.084 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.084 08:19:04 -- setup/common.sh@33 -- # echo 1025 00:22:31.085 08:19:04 -- setup/common.sh@33 -- # return 0 00:22:31.085 08:19:04 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:22:31.085 08:19:04 -- setup/hugepages.sh@112 -- # get_nodes 00:22:31.085 08:19:04 -- setup/hugepages.sh@27 -- # local node 00:22:31.085 08:19:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:31.085 08:19:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:22:31.085 08:19:04 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:31.085 08:19:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:31.085 08:19:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:31.085 08:19:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:31.085 08:19:04 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:31.085 08:19:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:31.085 08:19:04 -- setup/common.sh@18 -- # local node=0 00:22:31.085 08:19:04 -- setup/common.sh@19 -- # local var val 00:22:31.085 08:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.085 08:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.085 08:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:31.085 08:19:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:31.085 08:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.085 08:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:31.085 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.085 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.085 08:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7654128 kB' 'MemUsed: 4587848 kB' 'SwapCached: 0 kB' 'Active: 847620 kB' 'Inactive: 1278844 kB' 'Active(anon): 129756 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2007172 kB' 'Mapped: 48712 kB' 'AnonPages: 120872 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64780 kB' 'Slab: 140260 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:22:31.085 08:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.085 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.085 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.085 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.085 08:19:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.085 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.085 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.085 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.085 08:19:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.085 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.085 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.085 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 
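The @117 call above passes a node index, so the same scan runs against the per-node file instead of the global one (the @23/@24 steps switch mem_f to /sys/devices/system/node/node0/meminfo). A hypothetical call mirroring that step, using the get_meminfo sketch given earlier:

  # hypothetical invocation matching the setup/hugepages.sh@117 step above
  surp_node0=$(get_meminfo HugePages_Surp 0)   # reads /sys/devices/system/node/node0/meminfo; prints 0 here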
00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.086 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.086 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 
08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.350 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.350 08:19:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.350 08:19:04 -- setup/common.sh@33 -- # echo 0 00:22:31.350 08:19:04 -- setup/common.sh@33 -- # return 0 00:22:31.350 08:19:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:31.350 08:19:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:31.350 08:19:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:31.350 08:19:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:31.350 08:19:04 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:22:31.350 node0=1025 expecting 1025 00:22:31.350 08:19:04 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:22:31.350 00:22:31.350 real 0m0.628s 00:22:31.350 user 0m0.276s 00:22:31.350 sys 0m0.388s 00:22:31.350 08:19:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.350 08:19:04 -- common/autotest_common.sh@10 -- # set +x 00:22:31.350 ************************************ 00:22:31.350 END TEST odd_alloc 00:22:31.350 ************************************ 00:22:31.350 08:19:04 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:22:31.350 08:19:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:31.350 08:19:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:31.350 08:19:04 -- common/autotest_common.sh@10 -- # set +x 00:22:31.350 ************************************ 00:22:31.350 START TEST custom_alloc 00:22:31.350 ************************************ 00:22:31.350 08:19:04 -- common/autotest_common.sh@1104 -- # custom_alloc 00:22:31.350 08:19:04 -- setup/hugepages.sh@167 -- # local IFS=, 00:22:31.350 08:19:04 -- setup/hugepages.sh@169 -- # local node 00:22:31.350 08:19:04 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:22:31.350 08:19:04 -- setup/hugepages.sh@170 -- # local nodes_hp 00:22:31.350 08:19:04 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:22:31.350 08:19:04 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:22:31.350 08:19:04 -- setup/hugepages.sh@49 -- # local size=1048576 00:22:31.350 08:19:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
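The odd_alloc test above ends with the expected result (node0=1025 expecting 1025), and custom_alloc begins by sizing a 1 GiB pool. With the 2048 kB hugepage size reported in the meminfo dumps, the @49-@57 steps amount to dividing the requested size by the hugepage size; a sketch of that arithmetic using the values from the trace (units are kB):

  # sizing arithmetic behind setup/hugepages.sh@49-@57 above, values taken from the trace
  size=1048576            # requested pool: 1048576 kB = 1 GiB
  default_hugepages=2048  # Hugepagesize reported in /proc/meminfo
  (( nr_hugepages = size / default_hugepages ))
  echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=512, matching the trace

The per-node assignment that follows (nodes_hp[0]=512, HUGENODE='nodes_hp[0]=512') pins all 512 pages to node 0 before setup.sh runs.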
00:22:31.351 08:19:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:22:31.351 08:19:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:22:31.351 08:19:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:31.351 08:19:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:22:31.351 08:19:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:31.351 08:19:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:31.351 08:19:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:31.351 08:19:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:22:31.351 08:19:04 -- setup/hugepages.sh@83 -- # : 0 00:22:31.351 08:19:04 -- setup/hugepages.sh@84 -- # : 0 00:22:31.351 08:19:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:22:31.351 08:19:04 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:22:31.351 08:19:04 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:22:31.351 08:19:04 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:22:31.351 08:19:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:22:31.351 08:19:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:31.351 08:19:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:22:31.351 08:19:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:31.351 08:19:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:31.351 08:19:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:31.351 08:19:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:22:31.351 08:19:04 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:22:31.351 08:19:04 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:22:31.351 08:19:04 -- setup/hugepages.sh@78 -- # return 0 00:22:31.351 08:19:04 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:22:31.351 08:19:04 -- setup/hugepages.sh@187 -- # setup output 00:22:31.351 08:19:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:31.351 08:19:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:31.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:31.610 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:31.610 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:31.874 08:19:04 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:22:31.874 08:19:04 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:22:31.874 08:19:04 -- setup/hugepages.sh@89 -- # local node 00:22:31.874 08:19:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:31.874 08:19:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:31.874 08:19:04 -- setup/hugepages.sh@92 -- # local surp 00:22:31.874 08:19:04 -- setup/hugepages.sh@93 -- # local resv 00:22:31.874 08:19:04 -- setup/hugepages.sh@94 -- # local anon 00:22:31.874 08:19:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:31.874 08:19:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:31.874 
08:19:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:31.874 08:19:04 -- setup/common.sh@18 -- # local node= 00:22:31.874 08:19:04 -- setup/common.sh@19 -- # local var val 00:22:31.874 08:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.874 08:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.874 08:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:31.874 08:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:31.874 08:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.874 08:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.874 08:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8701320 kB' 'MemAvailable: 10493484 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 847740 kB' 'Inactive: 1278848 kB' 'Active(anon): 129876 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121260 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140252 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75472 kB' 'KernelStack: 6408 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.874 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.874 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 
08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.875 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.875 08:19:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:31.876 08:19:04 -- setup/common.sh@33 -- # echo 0 00:22:31.876 08:19:04 -- setup/common.sh@33 -- # return 0 00:22:31.876 08:19:04 -- setup/hugepages.sh@97 -- # anon=0 00:22:31.876 08:19:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:31.876 08:19:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:31.876 08:19:04 -- setup/common.sh@18 -- # local node= 00:22:31.876 08:19:04 -- setup/common.sh@19 -- # local var val 00:22:31.876 08:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.876 08:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.876 08:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:31.876 08:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:31.876 08:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.876 08:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
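At this point verify_nr_hugepages has established anon=0 and is reading HugePages_Surp and HugePages_Rsvd before checking them against the total. A paraphrase of the @96-@110 flow seen in both test runs above (a sketch of the traced steps, not the verbatim script, and relying on the get_meminfo sketch given earlier):

  # verification arithmetic paraphrasing the setup/hugepages.sh@96-@110 steps traced above
  nr_hugepages=512                                   # from the sizing step above
  thp=/sys/kernel/mm/transparent_hugepage/enabled    # assumed path behind the "[madvise]" check at @96
  anon=0
  [[ -r $thp && $(<"$thp") != *'[never]'* ]] && anon=$(get_meminfo AnonHugePages)   # 0 in this run
  surp=$(get_meminfo HugePages_Surp)                 # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)                 # 0 in this run
  total=$(get_meminfo HugePages_Total)               # 512 here, 1025 in the odd_alloc run above
  (( total == nr_hugepages + surp + resv )) && echo "nr_hugepages=$nr_hugepages verified"
  # the traced script then repeats the HugePages_Surp read for each NUMA node (@112-@117)
  # and prints the per-node expectation, e.g. "node0=1025 expecting 1025" above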
00:22:31.876 08:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8701320 kB' 'MemAvailable: 10493484 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 847588 kB' 'Inactive: 1278848 kB' 'Active(anon): 129724 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120832 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140256 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75476 kB' 'KernelStack: 6384 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 
00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.876 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.876 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 
08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.877 08:19:05 -- setup/common.sh@33 -- # echo 0 00:22:31.877 08:19:05 -- setup/common.sh@33 -- # return 0 00:22:31.877 08:19:05 -- setup/hugepages.sh@99 -- # surp=0 00:22:31.877 08:19:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:31.877 08:19:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:31.877 08:19:05 -- setup/common.sh@18 -- # local node= 00:22:31.877 08:19:05 -- setup/common.sh@19 -- # local var val 00:22:31.877 08:19:05 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.877 08:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.877 08:19:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:31.877 08:19:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:31.877 08:19:05 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.877 08:19:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.877 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.877 08:19:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8701320 kB' 'MemAvailable: 10493484 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 847468 kB' 'Inactive: 1278848 kB' 'Active(anon): 129604 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120820 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140252 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75472 kB' 'KernelStack: 6384 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354836 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 
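The xtrace above is setup/common.sh's get_meminfo helper scanning a meminfo snapshot field by field (here for HugePages_Rsvd): each line is split on ': ' into a name and a value, non-matching names hit continue, and the first match is echoed back to the caller. A condensed, standalone sketch of the same pattern, with illustrative names rather than the repo's exact code:

    get_meminfo_value() {
        # Usage: get_meminfo_value HugePages_Rsvd [node-id]
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs and carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}             # drop "Node N " when reading the sysfs file
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then          # e.g. HugePages_Rsvd
                echo "$val"                        # numeric value only; the kB unit is discarded
                return 0
            fi
        done < "$mem_f"
        return 1
    }

On this run, get_meminfo_value HugePages_Rsvd would print 0, which is exactly the resv=0 the trace records further down.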
00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.878 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.878 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 
-- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:31.879 08:19:05 -- setup/common.sh@33 -- # echo 0 00:22:31.879 08:19:05 -- setup/common.sh@33 -- # return 0 00:22:31.879 08:19:05 -- setup/hugepages.sh@100 -- # resv=0 00:22:31.879 08:19:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:22:31.879 nr_hugepages=512 00:22:31.879 resv_hugepages=0 00:22:31.879 08:19:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:31.879 surplus_hugepages=0 00:22:31.879 08:19:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:31.879 anon_hugepages=0 00:22:31.879 08:19:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:31.879 08:19:05 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:22:31.879 08:19:05 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:22:31.879 08:19:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:31.879 08:19:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:31.879 08:19:05 -- setup/common.sh@18 -- # local node= 00:22:31.879 08:19:05 -- setup/common.sh@19 -- # local var val 00:22:31.879 08:19:05 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.879 08:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.879 08:19:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:31.879 08:19:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:31.879 08:19:05 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.879 08:19:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:31.879 08:19:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8701716 kB' 'MemAvailable: 10493880 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 847708 kB' 'Inactive: 1278848 kB' 'Active(anon): 129844 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121036 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 64780 kB' 'Slab: 140220 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75440 kB' 'KernelStack: 6368 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 
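Just above, hugepages.sh@107 asserts that the hugepage total currently reported by the kernel (512) equals the count the custom_alloc test configured plus the surplus and reserved pages it just read back, i.e. 512 == 512 + 0 + 0. The same consistency check in isolation, with the inputs spelled out (illustrative variable names; the real script obtains them through get_meminfo):

    nr_hugepages=512     # pages the custom_alloc test requested
    surp=0               # HugePages_Surp read back from /proc/meminfo
    resv=0               # HugePages_Rsvd read back from /proc/meminfo
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "unexpected HugePages_Total: $total" >&2
    fi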
00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.879 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.879 08:19:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:22:31.880 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.880 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.880 08:19:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:31.881 08:19:05 -- setup/common.sh@33 -- # echo 512 00:22:31.881 08:19:05 -- setup/common.sh@33 -- # return 0 00:22:31.881 08:19:05 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:22:31.881 08:19:05 -- setup/hugepages.sh@112 -- # get_nodes 00:22:31.881 08:19:05 -- setup/hugepages.sh@27 -- # local node 00:22:31.881 08:19:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:31.881 08:19:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:22:31.881 08:19:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:31.881 08:19:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:31.881 08:19:05 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:22:31.881 08:19:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:31.881 08:19:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:31.881 08:19:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:31.881 08:19:05 -- setup/common.sh@18 -- # local node=0 00:22:31.881 08:19:05 -- setup/common.sh@19 -- # local var val 00:22:31.881 08:19:05 -- setup/common.sh@20 -- # local mem_f mem 00:22:31.881 08:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:31.881 08:19:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:31.881 08:19:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:31.881 08:19:05 -- setup/common.sh@28 -- # mapfile -t mem 00:22:31.881 08:19:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:31.881 08:19:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8701716 kB' 'MemUsed: 3540260 kB' 'SwapCached: 0 kB' 'Active: 847504 kB' 'Inactive: 1278848 kB' 'Active(anon): 129640 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2007176 kB' 'Mapped: 48712 kB' 'AnonPages: 120832 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64780 kB' 'Slab: 140220 kB' 'SReclaimable: 64780 kB' 'SUnreclaim: 75440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 
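This get_meminfo call passes node 0, so common.sh@23/@24 switch the source file from /proc/meminfo to /sys/devices/system/node/node0/meminfo and common.sh@29 strips the "Node 0 " prefix off every line before the usual name/value scan. A short standalone illustration of that per-node path (standard sysfs layout; the extglob strip mirrors the mapfile idiom visible in the trace):

    shopt -s extglob
    node=0
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    # Lines read "Node 0 HugePages_Surp: 0"; dropping the "Node 0 " prefix lets
    # the same "field: value" parsing work for both the global and per-node files.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | awk -F': +' '/^HugePages_Surp/ {print $2}'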
08:19:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.881 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.881 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 
-- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # continue 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:31.882 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:31.882 08:19:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:31.882 08:19:05 -- setup/common.sh@33 -- # echo 0 00:22:31.882 08:19:05 -- setup/common.sh@33 -- # return 0 00:22:31.882 08:19:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:31.882 08:19:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:31.882 08:19:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:31.882 08:19:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:31.882 08:19:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:22:31.882 node0=512 expecting 512 00:22:31.882 08:19:05 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:22:31.882 00:22:31.882 real 0m0.667s 00:22:31.882 user 0m0.290s 00:22:31.882 sys 0m0.416s 00:22:31.882 08:19:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.882 08:19:05 -- common/autotest_common.sh@10 -- # set +x 00:22:31.882 ************************************ 00:22:31.882 END TEST custom_alloc 00:22:31.882 ************************************ 00:22:31.882 08:19:05 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:22:31.882 08:19:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:31.882 08:19:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:31.882 08:19:05 -- common/autotest_common.sh@10 -- # set +x 00:22:31.882 ************************************ 00:22:31.882 START TEST no_shrink_alloc 00:22:31.882 ************************************ 00:22:31.882 08:19:05 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:22:31.882 08:19:05 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:22:31.882 08:19:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:22:31.882 08:19:05 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:22:31.882 08:19:05 -- setup/hugepages.sh@51 -- # shift 00:22:31.882 08:19:05 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:22:31.882 08:19:05 -- setup/hugepages.sh@52 -- # local node_ids 00:22:31.882 08:19:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:31.882 08:19:05 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:22:31.882 08:19:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:22:31.882 08:19:05 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:22:31.882 08:19:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:31.882 08:19:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:22:31.882 08:19:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:31.882 08:19:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:31.882 08:19:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:31.882 08:19:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:22:31.882 08:19:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:22:31.882 08:19:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:22:31.882 08:19:05 -- setup/hugepages.sh@73 -- # return 0 00:22:31.882 08:19:05 -- setup/hugepages.sh@198 -- # setup output 00:22:31.882 08:19:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:31.882 08:19:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:32.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:32.454 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:32.454 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:32.454 08:19:05 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:22:32.454 08:19:05 -- setup/hugepages.sh@89 -- # local node 00:22:32.454 08:19:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:32.454 08:19:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:32.454 08:19:05 -- setup/hugepages.sh@92 -- # local surp 00:22:32.454 08:19:05 -- setup/hugepages.sh@93 -- # local resv 00:22:32.454 08:19:05 -- setup/hugepages.sh@94 -- # local anon 00:22:32.454 08:19:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:32.454 08:19:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:32.454 08:19:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:32.454 08:19:05 -- setup/common.sh@18 -- # local node= 00:22:32.454 08:19:05 -- setup/common.sh@19 -- # local var val 00:22:32.454 08:19:05 -- setup/common.sh@20 -- # local mem_f mem 00:22:32.454 08:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:32.454 08:19:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:32.454 08:19:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:32.454 08:19:05 -- setup/common.sh@28 -- # mapfile -t mem 00:22:32.454 08:19:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662000 kB' 'MemAvailable: 9454156 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 842672 kB' 'Inactive: 1278848 kB' 'Active(anon): 124808 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115700 kB' 'Mapped: 48084 kB' 'Shmem: 10464 kB' 'KReclaimable: 64768 kB' 'Slab: 140160 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75392 kB' 'KernelStack: 6248 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334276 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 
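A little earlier, no_shrink_alloc called get_test_nr_hugepages with a size of 2097152 kB for node 0; with the 2048 kB Hugepagesize this VM reports, that is where nr_hugepages=1024 and nodes_test[0]=1024 come from, and why the meminfo snapshots in this test now show HugePages_Total: 1024. The arithmetic, spelled out with illustrative names:

    size_kb=2097152                                                      # 2 GiB requested by the test
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    echo $(( size_kb / hugepagesize_kb ))                                # -> 1024 pages, all pinned to node 0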
00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.454 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.454 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 
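The verify_nr_hugepages prologue only reads AnonHugePages when transparent hugepages are not fully disabled: the "always [madvise] never" string tested at hugepages.sh@96 is the THP "enabled" setting, and since the active mode is not "[never]" the script goes on to fetch AnonHugePages (0 kB here, hence anon=0 further down). A sketch of that gate; the sysfs path is the conventional location and is assumed rather than shown in the trace:

    # The THP mode string looks like "always [madvise] never"; brackets mark the active mode.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *'[never]'* ]]; then
        # THP may still hand out anonymous huge pages, so include them in the accounting.
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "anon_hugepages=${anon_kb}"
    fi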
08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # 
continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # continue 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.455 08:19:05 -- setup/common.sh@33 -- # echo 0 00:22:32.455 08:19:05 -- setup/common.sh@33 -- # return 0 00:22:32.455 08:19:05 -- setup/hugepages.sh@97 -- # anon=0 00:22:32.455 08:19:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:32.455 08:19:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:32.455 08:19:05 -- setup/common.sh@18 -- # local node= 00:22:32.455 08:19:05 -- setup/common.sh@19 -- # local var val 00:22:32.455 08:19:05 -- setup/common.sh@20 -- # local mem_f mem 00:22:32.455 08:19:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:32.455 08:19:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:32.455 08:19:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:32.455 08:19:05 -- setup/common.sh@28 -- # mapfile -t mem 00:22:32.455 08:19:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.455 08:19:05 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.455 08:19:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662276 kB' 'MemAvailable: 9454432 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 842224 kB' 'Inactive: 1278848 kB' 'Active(anon): 124360 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115572 kB' 'Mapped: 47972 kB' 'Shmem: 10464 kB' 'KReclaimable: 64768 kB' 'Slab: 140112 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6272 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.455 08:19:05 -- setup/common.sh@32 -- # 
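The setup/common.sh@31/@32 entries above are get_meminfo scanning the memory counters one "key: value" pair at a time and skipping every field until the requested one matches, at which point its value is echoed back to the caller (here AnonHugePages, giving anon=0). A minimal standalone sketch of that pattern, using an illustrative helper name rather than the SPDK function itself:

  #!/usr/bin/env bash
  # Stand-in for the traced get_meminfo loop: print the value of one meminfo field.
  get_meminfo_field() {
      local get=$1 mem_f=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # same skip-until-match seen in the trace
          echo "$val"
          return 0
      done < "$mem_f"
      return 1
  }

  get_meminfo_field AnonHugePages    # prints 0 on this VM, per the snapshot printed below
  get_meminfo_field HugePages_Total  # prints 1024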
00:22:32.455 08:19:05	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:22:32.455 08:19:05	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:22:32.455 08:19:05	-- setup/common.sh@18 -- # local node=
00:22:32.455 08:19:05	-- setup/common.sh@19 -- # local var val
00:22:32.455 08:19:05	-- setup/common.sh@20 -- # local mem_f mem
00:22:32.455 08:19:05	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:22:32.455 08:19:05	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:22:32.455 08:19:05	-- setup/common.sh@25 -- # [[ -n '' ]]
00:22:32.455 08:19:05	-- setup/common.sh@28 -- # mapfile -t mem
00:22:32.455 08:19:05	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:22:32.455 08:19:05	-- setup/common.sh@31 -- # IFS=': '
00:22:32.455 08:19:05	-- setup/common.sh@31 -- # read -r var val _
00:22:32.455 08:19:05	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662276 kB' 'MemAvailable: 9454432 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 842224 kB' 'Inactive: 1278848 kB' 'Active(anon): 124360 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115572 kB' 'Mapped: 47972 kB' 'Shmem: 10464 kB' 'KReclaimable: 64768 kB' 'Slab: 140112 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6272 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31/@32 xtrace repeats for every key from MemTotal through HugePages_Rsvd; none match HugePages_Surp ...]
00:22:32.457 08:19:05	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:22:32.457 08:19:05	-- setup/common.sh@33 -- # echo 0
00:22:32.457 08:19:05	-- setup/common.sh@33 -- # return 0
00:22:32.457 08:19:05	-- setup/hugepages.sh@99 -- # surp=0
00:22:32.457 08:19:05	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:22:32.457 08:19:05	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:22:32.457 08:19:05	-- setup/common.sh@18 -- # local node=
00:22:32.457 08:19:05	-- setup/common.sh@19 -- # local var val
00:22:32.457 08:19:05	-- setup/common.sh@20 -- # local mem_f mem
00:22:32.457 08:19:05	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:22:32.457 08:19:05	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:22:32.457 08:19:05	-- setup/common.sh@25 -- # [[ -n '' ]]
00:22:32.457 08:19:05	-- setup/common.sh@28 -- # mapfile -t mem
00:22:32.457 08:19:05	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:22:32.457 08:19:05	-- setup/common.sh@31 -- # IFS=': '
00:22:32.457 08:19:05	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662276 kB' 'MemAvailable: 9454432 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 842504 kB' 'Inactive: 1278848 kB' 'Active(anon): 124640 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115844 kB' 'Mapped: 47972 kB' 'Shmem: 10464 kB' 'KReclaimable: 64768 kB' 'Slab: 140104 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75336 kB' 'KernelStack: 6272 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31/@32 xtrace repeats for every key from MemTotal through HugePages_Free; none match HugePages_Rsvd ...]
00:22:32.720 08:19:05	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:22:32.720 08:19:05	-- setup/common.sh@33 -- # echo 0
00:22:32.720 08:19:05	-- setup/common.sh@33 -- # return 0
00:22:32.720 08:19:05	-- setup/hugepages.sh@100 -- # resv=0
00:22:32.720 nr_hugepages=1024
00:22:32.720 08:19:05	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:22:32.720 resv_hugepages=0
00:22:32.720 08:19:05	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:22:32.720 surplus_hugepages=0
00:22:32.720 08:19:05	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:22:32.720 anon_hugepages=0
00:22:32.720 08:19:05	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:22:32.720 08:19:05	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:22:32.720 08:19:05	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
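With anon, surp and resv collected, hugepages.sh@107-109 above cross-checks the accounting: the expected hugepage count (1024 here) must equal the configured pool plus surplus and reserved pages, which passes with surp=0 and resv=0. A rough standalone equivalent of that check; the helper and variable names are illustrative, not the SPDK script's own:

  #!/usr/bin/env bash
  # Hedged sketch of the hugepage accounting check seen in the trace above.
  meminfo_field() { awk -v k="$1" -F': +' '$1 == k {print $2 + 0}' /proc/meminfo; }

  expected=1024                                   # what this test run configured (assumption)
  nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)   # persistent hugepage pool size
  surp=$(meminfo_field HugePages_Surp)
  resv=$(meminfo_field HugePages_Rsvd)

  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
  if (( expected == nr_hugepages + surp + resv )); then
      echo "hugepage accounting matches the expected $expected pages"
  else
      echo "hugepage accounting mismatch" >&2
      exit 1
  fi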
00:22:32.720 08:19:05	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:22:32.720 08:19:05	-- setup/common.sh@17 -- # local get=HugePages_Total
00:22:32.720 08:19:05	-- setup/common.sh@18 -- # local node=
00:22:32.720 08:19:05	-- setup/common.sh@19 -- # local var val
00:22:32.720 08:19:05	-- setup/common.sh@20 -- # local mem_f mem
00:22:32.720 08:19:05	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:22:32.720 08:19:05	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:22:32.720 08:19:05	-- setup/common.sh@25 -- # [[ -n '' ]]
00:22:32.720 08:19:05	-- setup/common.sh@28 -- # mapfile -t mem
00:22:32.720 08:19:05	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:22:32.720 08:19:05	-- setup/common.sh@31 -- # IFS=': '
00:22:32.721 08:19:05	-- setup/common.sh@31 -- # read -r var val _
00:22:32.721 08:19:05	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662276 kB' 'MemAvailable: 9454432 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 842428 kB' 'Inactive: 1278848 kB' 'Active(anon): 124564 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115752 kB' 'Mapped: 47972 kB' 'Shmem: 10464 kB' 'KReclaimable: 64768 kB' 'Slab: 140084 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75316 kB' 'KernelStack: 6256 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31/@32 xtrace repeats for every key from MemTotal through Unaccepted; none match HugePages_Total ...]
00:22:32.722 08:19:05	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:22:32.722 08:19:05	-- setup/common.sh@33 -- # echo 1024
00:22:32.722 08:19:05	-- setup/common.sh@33 -- # return 0
00:22:32.722 08:19:05	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:22:32.722 08:19:05	-- setup/hugepages.sh@112 -- # get_nodes
00:22:32.722 08:19:05	-- setup/hugepages.sh@27 -- # local node
00:22:32.722 08:19:05	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:22:32.722 08:19:05	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:22:32.722 08:19:05	-- setup/hugepages.sh@32 -- # no_nodes=1
00:22:32.722 08:19:05	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
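The get_nodes step above builds the list of NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) (an extglob pattern) and records how many hugepages each node is expected to hold; on this single-node VM that yields no_nodes=1 with 1024 pages expected on node0. A sketch of that enumeration under the same assumptions; array and variable names are illustrative, not the SPDK script's:

  #!/usr/bin/env bash
  # NUMA node enumeration, mirroring the extglob loop traced at hugepages.sh@29-33.
  shopt -s extglob nullglob

  declare -A nodes_sys=()
  expected_per_node=1024   # assumption: the count this test expects on every node

  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$expected_per_node   # key is the numeric node index
  done

  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
  echo "nodes: ${!nodes_sys[*]} (no_nodes=$no_nodes)"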
00:22:32.722 08:19:05	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:22:32.722 08:19:05	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:22:32.722 08:19:05	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:22:32.722 08:19:05	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:22:32.722 08:19:05	-- setup/common.sh@18 -- # local node=0
00:22:32.722 08:19:05	-- setup/common.sh@19 -- # local var val
00:22:32.722 08:19:05	-- setup/common.sh@20 -- # local mem_f mem
00:22:32.722 08:19:05	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:22:32.722 08:19:05	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:22:32.722 08:19:05	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:22:32.722 08:19:05	-- setup/common.sh@28 -- # mapfile -t mem
00:22:32.722 08:19:05	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:22:32.722 08:19:05	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662276 kB' 'MemUsed: 4579700 kB' 'SwapCached: 0 kB' 'Active: 842408 kB' 'Inactive: 1278848 kB' 'Active(anon): 124544 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2007176 kB' 'Mapped: 47972 kB' 'AnonPages: 115728 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64768 kB' 'Slab: 140084 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31/@32 xtrace repeats for every node0 meminfo key from MemTotal through HugePages_Free; none match HugePages_Surp ...]
00:22:32.723 08:19:05	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:22:32.723 08:19:05	-- setup/common.sh@33 -- # echo 0
00:22:32.723 08:19:05	-- setup/common.sh@33 -- # return 0
00:22:32.723 08:19:05	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:22:32.723 08:19:05	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:22:32.723 08:19:05	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:22:32.723 08:19:05	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:22:32.723 08:19:05	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:22:32.723 node0=1024 expecting 1024
00:22:32.723 08:19:05	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
nodes_test[node] += 0 )) 00:22:32.723 08:19:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:32.723 08:19:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:32.723 08:19:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:32.723 08:19:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:22:32.723 node0=1024 expecting 1024 00:22:32.723 08:19:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:22:32.723 08:19:05 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:22:32.723 08:19:05 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:22:32.723 08:19:05 -- setup/hugepages.sh@202 -- # setup output 00:22:32.723 08:19:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:32.723 08:19:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:32.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:32.985 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:32.985 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:32.985 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:22:32.985 08:19:06 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:22:32.985 08:19:06 -- setup/hugepages.sh@89 -- # local node 00:22:32.985 08:19:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:32.985 08:19:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:32.985 08:19:06 -- setup/hugepages.sh@92 -- # local surp 00:22:32.985 08:19:06 -- setup/hugepages.sh@93 -- # local resv 00:22:32.985 08:19:06 -- setup/hugepages.sh@94 -- # local anon 00:22:32.985 08:19:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:32.985 08:19:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:32.985 08:19:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:32.985 08:19:06 -- setup/common.sh@18 -- # local node= 00:22:32.985 08:19:06 -- setup/common.sh@19 -- # local var val 00:22:32.985 08:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:22:32.985 08:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:32.985 08:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:32.985 08:19:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:32.985 08:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:22:32.985 08:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:32.985 08:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7668088 kB' 'MemAvailable: 9460244 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 842916 kB' 'Inactive: 1278848 kB' 'Active(anon): 125052 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116184 kB' 'Mapped: 48204 kB' 'Shmem: 10464 kB' 'KReclaimable: 64768 kB' 'Slab: 139948 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75180 kB' 'KernelStack: 6328 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- 
setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.985 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.985 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # 
read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:32.986 08:19:06 -- setup/common.sh@33 -- # echo 0 00:22:32.986 08:19:06 -- setup/common.sh@33 -- # return 0 00:22:32.986 08:19:06 -- setup/hugepages.sh@97 -- # anon=0 00:22:32.986 08:19:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:32.986 08:19:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:32.986 08:19:06 -- setup/common.sh@18 -- # local node= 00:22:32.986 08:19:06 -- setup/common.sh@19 -- # local var val 00:22:32.986 08:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:22:32.986 08:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:32.986 08:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:32.986 08:19:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:32.986 08:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:22:32.986 08:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:32.986 08:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7668088 kB' 'MemAvailable: 9460244 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 842364 kB' 'Inactive: 1278848 kB' 'Active(anon): 124500 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115608 kB' 'Mapped: 48096 kB' 'Shmem: 10464 kB' 'KReclaimable: 64768 kB' 'Slab: 139952 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75184 kB' 'KernelStack: 6232 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r 
var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.986 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.986 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 
08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
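The wall of "continue" lines above is setup/common.sh's get_meminfo helper scanning a cached /proc/meminfo dump one "key: value" pair at a time: common.sh@31 splits each line with IFS=': ', @32 skips every key that is not the one requested, and @33 echoes the value (0 here) and returns. A minimal, paraphrased sketch of that lookup - not the verbatim SPDK source, and using an illustrative function name - looks like this:

get_meminfo_sketch() {
    local get=$1    # key to look up, e.g. HugePages_Surp or AnonHugePages
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys hit "continue", as in the trace
        echo "${val:-0}"                   # matching key: print its value and stop
        return 0
    done < /proc/meminfo
    echo 0                                 # key absent: report 0, like the "echo 0 / return 0" lines
}

Calling get_meminfo_sketch HugePages_Total on this VM would print 1024, matching the HugePages_Total value in the dump above.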
00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r 
var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.987 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:32.987 08:19:06 -- 
setup/common.sh@33 -- # echo 0 00:22:32.987 08:19:06 -- setup/common.sh@33 -- # return 0 00:22:32.987 08:19:06 -- setup/hugepages.sh@99 -- # surp=0 00:22:32.987 08:19:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:32.987 08:19:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:32.987 08:19:06 -- setup/common.sh@18 -- # local node= 00:22:32.987 08:19:06 -- setup/common.sh@19 -- # local var val 00:22:32.987 08:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:22:32.987 08:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:32.987 08:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:32.987 08:19:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:32.987 08:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:22:32.987 08:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.987 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7668096 kB' 'MemAvailable: 9460252 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 842476 kB' 'Inactive: 1278848 kB' 'Active(anon): 124612 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115760 kB' 'Mapped: 48088 kB' 'Shmem: 10464 kB' 'KReclaimable: 64768 kB' 'Slab: 139956 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75188 kB' 'KernelStack: 6232 kB' 'PageTables: 3632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- 
setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # continue 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:32.988 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:32.988 08:19:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 
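These lookups feed the bookkeeping in setup/hugepages.sh's verify_nr_hugepages: anon and surp have already come back as 0, the HugePages_Rsvd scan is still in flight here, and a few lines further down the script echoes nr_hugepages=1024 and runs the (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) checks (hugepages.sh@107/@109). A hedged, self-contained sketch of that accounting - with illustrative helper names rather than the real SPDK functions - is:

meminfo() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

verify_nr_hugepages_sketch() {
    local expected=1024                       # the pool size this verification expects
    local anon surp resv total
    anon=$(meminfo AnonHugePages)             # 0 kB in this run
    surp=$(meminfo HugePages_Surp)            # 0
    resv=$(meminfo HugePages_Rsvd)            # 0
    total=$(meminfo HugePages_Total)          # 1024
    (( total == expected + surp + resv )) || return 1   # mirrors the hugepages.sh@107 check
    (( total == expected )) || return 1                  # mirrors hugepages.sh@109
    echo "node0=$total expecting $expected"              # cf. "node0=1024 expecting 1024" earlier
}

With surp and resv both 0 the two checks collapse to 1024 == 1024; the INFO line above ("Requested 512 hugepages but 1024 already allocated on node0") is why the pre-existing 1024-page pool is what gets verified here rather than a fresh 512-page one.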
00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.250 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.250 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:33.251 08:19:06 -- setup/common.sh@33 -- # echo 0 00:22:33.251 08:19:06 -- setup/common.sh@33 -- # return 0 00:22:33.251 08:19:06 -- setup/hugepages.sh@100 -- # resv=0 00:22:33.251 nr_hugepages=1024 00:22:33.251 08:19:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:22:33.251 resv_hugepages=0 00:22:33.251 08:19:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:33.251 surplus_hugepages=0 00:22:33.251 08:19:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:33.251 anon_hugepages=0 00:22:33.251 08:19:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:33.251 08:19:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:33.251 08:19:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:22:33.251 08:19:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:33.251 08:19:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:33.251 08:19:06 -- setup/common.sh@18 -- # local node= 00:22:33.251 08:19:06 -- setup/common.sh@19 -- # local var val 00:22:33.251 08:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:22:33.251 08:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:33.251 08:19:06 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:22:33.251 08:19:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:33.251 08:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:22:33.251 08:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:33.251 08:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7669068 kB' 'MemAvailable: 9461224 kB' 'Buffers: 2436 kB' 'Cached: 2004740 kB' 'SwapCached: 0 kB' 'Active: 842484 kB' 'Inactive: 1278848 kB' 'Active(anon): 124620 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 115728 kB' 'Mapped: 47972 kB' 'Shmem: 10464 kB' 'KReclaimable: 64768 kB' 'Slab: 139952 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75184 kB' 'KernelStack: 6256 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 334644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.251 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.251 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 
08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:33.252 08:19:06 -- setup/common.sh@33 -- # echo 1024 00:22:33.252 08:19:06 -- setup/common.sh@33 -- # return 0 00:22:33.252 08:19:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:33.252 08:19:06 -- setup/hugepages.sh@112 -- # get_nodes 00:22:33.252 08:19:06 -- setup/hugepages.sh@27 -- # local node 00:22:33.252 08:19:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:33.252 08:19:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:22:33.252 08:19:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:33.252 08:19:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:33.252 08:19:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:33.252 08:19:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:33.252 08:19:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:33.252 08:19:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:33.252 08:19:06 -- setup/common.sh@18 -- # local node=0 00:22:33.252 08:19:06 -- setup/common.sh@19 -- # local var val 00:22:33.252 08:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:22:33.252 08:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:33.252 08:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:33.252 08:19:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:33.252 08:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:22:33.252 08:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7678924 kB' 'MemUsed: 4563052 kB' 'SwapCached: 0 kB' 'Active: 842228 kB' 'Inactive: 1278848 kB' 'Active(anon): 124364 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1278848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 2007176 kB' 'Mapped: 47972 kB' 'AnonPages: 115728 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64768 kB' 'Slab: 139952 kB' 'SReclaimable: 64768 kB' 'SUnreclaim: 75184 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.252 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.252 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 
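The loop traced here is get_meminfo from setup/common.sh: it reads either /proc/meminfo or the per-node file under /sys/devices/system/node, strips the "Node <N>" prefix, and scans each line with IFS=': ' until the requested key matches (HugePages_Total for the whole system earlier, HugePages_Surp on node 0 here). Below is a condensed standalone sketch of that parse pattern, not the SPDK helper itself; the function name meminfo_value and its interface are illustrative only.

#!/usr/bin/env bash
# Sketch: fetch one counter from /proc/meminfo or a per-node meminfo file.
# Usage: meminfo_value <Key> [node]   e.g. meminfo_value HugePages_Surp 0
shopt -s extglob

meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node counters live in /sys/devices/system/node/node<N>/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; drop it so keys match.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

meminfo_value HugePages_Total     # whole-system pool, 1024 in this run
meminfo_value HugePages_Surp 0    # surplus pages on node 0, 0 in this run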
00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # continue 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:22:33.253 08:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:22:33.253 08:19:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:33.253 08:19:06 -- setup/common.sh@33 -- # echo 0 00:22:33.253 08:19:06 -- setup/common.sh@33 -- # return 0 00:22:33.253 08:19:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:33.253 08:19:06 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:33.253 08:19:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:33.253 08:19:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:33.253 08:19:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:22:33.253 node0=1024 expecting 1024 00:22:33.253 08:19:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:22:33.253 00:22:33.253 real 0m1.227s 00:22:33.253 user 0m0.576s 00:22:33.253 sys 0m0.720s 00:22:33.253 08:19:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.253 08:19:06 -- common/autotest_common.sh@10 -- # set +x 00:22:33.253 ************************************ 00:22:33.253 END TEST no_shrink_alloc 00:22:33.253 ************************************ 00:22:33.253 08:19:06 -- setup/hugepages.sh@217 -- # clear_hp 00:22:33.253 08:19:06 -- setup/hugepages.sh@37 -- # local node hp 00:22:33.253 08:19:06 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:22:33.253 08:19:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:22:33.253 08:19:06 -- setup/hugepages.sh@41 -- # echo 0 00:22:33.253 08:19:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:22:33.253 08:19:06 -- setup/hugepages.sh@41 -- # echo 0 00:22:33.253 08:19:06 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:22:33.253 08:19:06 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:22:33.253 00:22:33.253 real 0m5.624s 00:22:33.253 user 0m2.444s 00:22:33.253 sys 0m3.365s 00:22:33.253 08:19:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.253 08:19:06 -- common/autotest_common.sh@10 -- # set +x 00:22:33.253 ************************************ 00:22:33.253 END TEST hugepages 00:22:33.253 ************************************ 00:22:33.253 08:19:06 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:22:33.253 08:19:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:33.253 08:19:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:33.253 08:19:06 -- common/autotest_common.sh@10 -- # set +x 00:22:33.253 ************************************ 00:22:33.253 START TEST driver 00:22:33.253 ************************************ 00:22:33.253 08:19:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:22:33.513 * Looking for test storage... 
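The driver suite that starts above walks pick_driver from setup/driver.sh: try vfio first, which is only usable when IOMMU groups exist or unsafe no-IOMMU mode is enabled, otherwise settle for uio_pci_generic if modprobe can resolve the module. The following is a condensed sketch of that decision using the same checks the trace performs; it is not the script itself.

#!/usr/bin/env bash
# Sketch of the pick_driver decision: vfio-pci when an IOMMU is usable,
# otherwise uio_pci_generic when the module resolves, else give up.
shopt -s nullglob

pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=""
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    # Any populated IOMMU group (or noiommu explicitly allowed) -> vfio-pci.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
    # Otherwise uio_pci_generic, provided modprobe can resolve its .ko chain.
    elif modprobe --show-depends uio_pci_generic &> /dev/null; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
        return 1
    fi
}

pick_driver    # on this guest: uio_pci_generic (no IOMMU groups in the VM)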
00:22:33.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:33.513 08:19:06 -- setup/driver.sh@68 -- # setup reset 00:22:33.513 08:19:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:33.513 08:19:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:34.081 08:19:07 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:22:34.081 08:19:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:34.081 08:19:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:34.081 08:19:07 -- common/autotest_common.sh@10 -- # set +x 00:22:34.081 ************************************ 00:22:34.081 START TEST guess_driver 00:22:34.081 ************************************ 00:22:34.081 08:19:07 -- common/autotest_common.sh@1104 -- # guess_driver 00:22:34.081 08:19:07 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:22:34.081 08:19:07 -- setup/driver.sh@47 -- # local fail=0 00:22:34.081 08:19:07 -- setup/driver.sh@49 -- # pick_driver 00:22:34.081 08:19:07 -- setup/driver.sh@36 -- # vfio 00:22:34.081 08:19:07 -- setup/driver.sh@21 -- # local iommu_grups 00:22:34.081 08:19:07 -- setup/driver.sh@22 -- # local unsafe_vfio 00:22:34.081 08:19:07 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:22:34.081 08:19:07 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:22:34.081 08:19:07 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:22:34.081 08:19:07 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:22:34.081 08:19:07 -- setup/driver.sh@32 -- # return 1 00:22:34.081 08:19:07 -- setup/driver.sh@38 -- # uio 00:22:34.081 08:19:07 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:22:34.081 08:19:07 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:22:34.081 08:19:07 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:22:34.081 08:19:07 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:22:34.081 08:19:07 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:22:34.081 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:22:34.081 08:19:07 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:22:34.081 Looking for driver=uio_pci_generic 00:22:34.081 08:19:07 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:22:34.081 08:19:07 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:22:34.081 08:19:07 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:22:34.081 08:19:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:22:34.081 08:19:07 -- setup/driver.sh@45 -- # setup output config 00:22:34.081 08:19:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:34.081 08:19:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:35.018 08:19:08 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:22:35.018 08:19:08 -- setup/driver.sh@58 -- # continue 00:22:35.018 08:19:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:22:35.018 08:19:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:22:35.018 08:19:08 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:22:35.018 08:19:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:22:35.018 08:19:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:22:35.018 08:19:08 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:22:35.018 08:19:08 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:22:35.018 08:19:08 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:22:35.018 08:19:08 -- setup/driver.sh@65 -- # setup reset 00:22:35.018 08:19:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:35.018 08:19:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:35.958 00:22:35.958 real 0m1.709s 00:22:35.958 user 0m0.603s 00:22:35.958 sys 0m1.166s 00:22:35.958 08:19:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.958 08:19:08 -- common/autotest_common.sh@10 -- # set +x 00:22:35.958 ************************************ 00:22:35.958 END TEST guess_driver 00:22:35.958 ************************************ 00:22:35.958 00:22:35.958 real 0m2.483s 00:22:35.958 user 0m0.864s 00:22:35.958 sys 0m1.766s 00:22:35.958 08:19:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.958 08:19:08 -- common/autotest_common.sh@10 -- # set +x 00:22:35.958 ************************************ 00:22:35.958 END TEST driver 00:22:35.958 ************************************ 00:22:35.958 08:19:09 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:22:35.958 08:19:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:35.958 08:19:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:35.958 08:19:09 -- common/autotest_common.sh@10 -- # set +x 00:22:35.958 ************************************ 00:22:35.958 START TEST devices 00:22:35.958 ************************************ 00:22:35.958 08:19:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:22:35.958 * Looking for test storage... 00:22:35.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:35.958 08:19:09 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:22:35.958 08:19:09 -- setup/devices.sh@192 -- # setup reset 00:22:35.958 08:19:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:35.958 08:19:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:36.897 08:19:10 -- setup/devices.sh@194 -- # get_zoned_devs 00:22:36.897 08:19:10 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:22:36.897 08:19:10 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:22:36.897 08:19:10 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:22:36.897 08:19:10 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:36.897 08:19:10 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:22:36.897 08:19:10 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:22:36.897 08:19:10 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:36.897 08:19:10 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:36.897 08:19:10 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:36.897 08:19:10 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:22:36.897 08:19:10 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:22:36.897 08:19:10 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:36.897 08:19:10 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:36.897 08:19:10 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:36.897 08:19:10 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:22:36.897 08:19:10 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:22:36.897 08:19:10 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:22:36.897 08:19:10 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:36.897 08:19:10 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:36.897 08:19:10 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:22:36.897 08:19:10 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:22:36.897 08:19:10 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:22:36.897 08:19:10 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:36.897 08:19:10 -- setup/devices.sh@196 -- # blocks=() 00:22:36.897 08:19:10 -- setup/devices.sh@196 -- # declare -a blocks 00:22:36.897 08:19:10 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:22:36.897 08:19:10 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:22:36.897 08:19:10 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:22:36.897 08:19:10 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:22:36.897 08:19:10 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:22:36.897 08:19:10 -- setup/devices.sh@201 -- # ctrl=nvme0 00:22:36.897 08:19:10 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:22:36.897 08:19:10 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:22:36.897 08:19:10 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:22:36.897 08:19:10 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:22:36.898 08:19:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:36.898 No valid GPT data, bailing 00:22:36.898 08:19:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:36.898 08:19:10 -- scripts/common.sh@393 -- # pt= 00:22:36.898 08:19:10 -- scripts/common.sh@394 -- # return 1 00:22:36.898 08:19:10 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:22:36.898 08:19:10 -- setup/common.sh@76 -- # local dev=nvme0n1 00:22:36.898 08:19:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:36.898 08:19:10 -- setup/common.sh@80 -- # echo 5368709120 00:22:36.898 08:19:10 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:22:36.898 08:19:10 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:22:36.898 08:19:10 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:22:36.898 08:19:10 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:22:36.898 08:19:10 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:22:36.898 08:19:10 -- setup/devices.sh@201 -- # ctrl=nvme1 00:22:36.898 08:19:10 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:22:36.898 08:19:10 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:22:36.898 08:19:10 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:22:36.898 08:19:10 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:22:36.898 08:19:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:36.898 No valid GPT data, bailing 00:22:36.898 08:19:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:36.898 08:19:10 -- scripts/common.sh@393 -- # pt= 00:22:36.898 08:19:10 -- scripts/common.sh@394 -- # return 1 00:22:36.898 08:19:10 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:22:36.898 08:19:10 -- setup/common.sh@76 -- # local dev=nvme1n1 00:22:36.898 08:19:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:36.898 08:19:10 -- setup/common.sh@80 -- # echo 4294967296 00:22:36.898 08:19:10 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:22:36.898 08:19:10 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:22:36.898 08:19:10 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:22:36.898 08:19:10 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:22:36.898 08:19:10 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:22:36.898 08:19:10 -- setup/devices.sh@201 -- # ctrl=nvme1 00:22:36.898 08:19:10 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:22:36.898 08:19:10 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:22:36.898 08:19:10 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:22:36.898 08:19:10 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:22:36.898 08:19:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:22:37.158 No valid GPT data, bailing 00:22:37.158 08:19:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:22:37.158 08:19:10 -- scripts/common.sh@393 -- # pt= 00:22:37.158 08:19:10 -- scripts/common.sh@394 -- # return 1 00:22:37.158 08:19:10 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:22:37.158 08:19:10 -- setup/common.sh@76 -- # local dev=nvme1n2 00:22:37.158 08:19:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:22:37.158 08:19:10 -- setup/common.sh@80 -- # echo 4294967296 00:22:37.158 08:19:10 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:22:37.158 08:19:10 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:22:37.158 08:19:10 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:22:37.158 08:19:10 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:22:37.158 08:19:10 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:22:37.158 08:19:10 -- setup/devices.sh@201 -- # ctrl=nvme1 00:22:37.158 08:19:10 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:22:37.158 08:19:10 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:22:37.158 08:19:10 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:22:37.158 08:19:10 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:22:37.158 08:19:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:22:37.158 No valid GPT data, bailing 00:22:37.158 08:19:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:22:37.158 08:19:10 -- scripts/common.sh@393 -- # pt= 00:22:37.158 08:19:10 -- scripts/common.sh@394 -- # return 1 00:22:37.158 08:19:10 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:22:37.158 08:19:10 -- setup/common.sh@76 -- # local dev=nvme1n3 00:22:37.158 08:19:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:22:37.158 08:19:10 -- setup/common.sh@80 -- # echo 4294967296 00:22:37.158 08:19:10 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:22:37.158 08:19:10 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:22:37.158 08:19:10 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:22:37.158 08:19:10 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:22:37.158 08:19:10 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:22:37.158 08:19:10 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:22:37.158 08:19:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:37.158 08:19:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:37.158 08:19:10 -- common/autotest_common.sh@10 -- # set +x 00:22:37.158 
************************************ 00:22:37.158 START TEST nvme_mount 00:22:37.158 ************************************ 00:22:37.158 08:19:10 -- common/autotest_common.sh@1104 -- # nvme_mount 00:22:37.158 08:19:10 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:22:37.158 08:19:10 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:22:37.158 08:19:10 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:37.158 08:19:10 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:37.158 08:19:10 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:22:37.158 08:19:10 -- setup/common.sh@39 -- # local disk=nvme0n1 00:22:37.158 08:19:10 -- setup/common.sh@40 -- # local part_no=1 00:22:37.158 08:19:10 -- setup/common.sh@41 -- # local size=1073741824 00:22:37.158 08:19:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:22:37.158 08:19:10 -- setup/common.sh@44 -- # parts=() 00:22:37.158 08:19:10 -- setup/common.sh@44 -- # local parts 00:22:37.158 08:19:10 -- setup/common.sh@46 -- # (( part = 1 )) 00:22:37.158 08:19:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:37.158 08:19:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:22:37.158 08:19:10 -- setup/common.sh@46 -- # (( part++ )) 00:22:37.158 08:19:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:37.158 08:19:10 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:22:37.158 08:19:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:22:37.158 08:19:10 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:22:38.096 Creating new GPT entries in memory. 00:22:38.096 GPT data structures destroyed! You may now partition the disk using fdisk or 00:22:38.096 other utilities. 00:22:38.096 08:19:11 -- setup/common.sh@57 -- # (( part = 1 )) 00:22:38.096 08:19:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:38.096 08:19:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:22:38.096 08:19:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:22:38.096 08:19:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:22:39.505 Creating new GPT entries in memory. 00:22:39.505 The operation has completed successfully. 
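What just happened above: partition_drive wiped the GPT on /dev/nvme0n1 with sgdisk and created a single partition spanning sectors 2048-264191; the trace that follows formats it, mounts it under the test directory, and drops a dummy file on it. A rough equivalent with plain tooling, assuming the disk is expendable and using udevadm settle in place of the repo's sync_dev_uevents.sh helper:

#!/usr/bin/env bash
set -euo pipefail
disk=/dev/nvme0n1                                   # scratch test disk (assumption)
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

# Wipe any existing partition table, then create one partition covering
# the same sector range as the traced sgdisk call.
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:264191
udevadm settle                                      # wait for /dev/nvme0n1p1 to appear

# Format and mount it, then create the dummy file the test later verifies.
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"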
00:22:39.505 08:19:12 -- setup/common.sh@57 -- # (( part++ )) 00:22:39.505 08:19:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:39.505 08:19:12 -- setup/common.sh@62 -- # wait 52306 00:22:39.505 08:19:12 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:39.505 08:19:12 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:22:39.505 08:19:12 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:39.505 08:19:12 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:22:39.505 08:19:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:22:39.505 08:19:12 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:39.505 08:19:12 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:39.505 08:19:12 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:22:39.505 08:19:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:22:39.505 08:19:12 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:39.505 08:19:12 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:39.505 08:19:12 -- setup/devices.sh@53 -- # local found=0 00:22:39.505 08:19:12 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:39.505 08:19:12 -- setup/devices.sh@56 -- # : 00:22:39.505 08:19:12 -- setup/devices.sh@59 -- # local pci status 00:22:39.505 08:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:39.505 08:19:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:22:39.505 08:19:12 -- setup/devices.sh@47 -- # setup output config 00:22:39.505 08:19:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:39.505 08:19:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:39.505 08:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:39.505 08:19:12 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:22:39.505 08:19:12 -- setup/devices.sh@63 -- # found=1 00:22:39.505 08:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:39.505 08:19:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:39.505 08:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:40.071 08:19:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:40.071 08:19:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:40.071 08:19:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:40.071 08:19:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:40.071 08:19:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:40.071 08:19:13 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:22:40.071 08:19:13 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:40.071 08:19:13 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:40.071 08:19:13 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:40.071 08:19:13 -- setup/devices.sh@110 -- # cleanup_nvme 00:22:40.071 08:19:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:40.071 08:19:13 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:40.071 08:19:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:40.071 08:19:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:22:40.071 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:22:40.071 08:19:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:22:40.071 08:19:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:22:40.636 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:22:40.636 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:22:40.636 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:22:40.636 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:22:40.636 08:19:13 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:22:40.636 08:19:13 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:22:40.636 08:19:13 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:40.636 08:19:13 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:22:40.636 08:19:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:22:40.636 08:19:13 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:40.636 08:19:13 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:40.636 08:19:13 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:22:40.636 08:19:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:22:40.636 08:19:13 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:40.636 08:19:13 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:40.636 08:19:13 -- setup/devices.sh@53 -- # local found=0 00:22:40.636 08:19:13 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:40.636 08:19:13 -- setup/devices.sh@56 -- # : 00:22:40.636 08:19:13 -- setup/devices.sh@59 -- # local pci status 00:22:40.636 08:19:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:22:40.636 08:19:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:40.636 08:19:13 -- setup/devices.sh@47 -- # setup output config 00:22:40.636 08:19:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:40.636 08:19:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:40.636 08:19:13 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:40.636 08:19:13 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:22:40.636 08:19:13 -- setup/devices.sh@63 -- # found=1 00:22:40.636 08:19:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:40.636 08:19:13 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:40.636 
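cleanup_nvme, traced just above, is the teardown half: unmount the test directory if it is still a mountpoint, then wipefs the partition and finally the whole disk, so the whole-disk mkfs run above could start from a blank device. A minimal sketch of that teardown under the same scratch-disk assumption:

#!/usr/bin/env bash
disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

# Unmount only if something is actually mounted there.
if mountpoint -q "$mnt"; then
    umount "$mnt"
fi

# Erase the filesystem signature on the partition (if it still exists),
# then every signature on the disk itself (GPT, backup GPT, protective MBR).
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
[[ -b $disk ]] && wipefs --all "$disk"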
08:19:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:41.203 08:19:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:41.203 08:19:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:41.203 08:19:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:41.204 08:19:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:41.462 08:19:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:41.462 08:19:14 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:22:41.462 08:19:14 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:41.462 08:19:14 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:41.462 08:19:14 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:41.462 08:19:14 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:41.462 08:19:14 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:22:41.462 08:19:14 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:22:41.462 08:19:14 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:22:41.462 08:19:14 -- setup/devices.sh@50 -- # local mount_point= 00:22:41.462 08:19:14 -- setup/devices.sh@51 -- # local test_file= 00:22:41.462 08:19:14 -- setup/devices.sh@53 -- # local found=0 00:22:41.462 08:19:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:22:41.462 08:19:14 -- setup/devices.sh@59 -- # local pci status 00:22:41.462 08:19:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:41.463 08:19:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:22:41.463 08:19:14 -- setup/devices.sh@47 -- # setup output config 00:22:41.463 08:19:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:41.463 08:19:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:41.722 08:19:14 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:41.722 08:19:14 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:22:41.722 08:19:14 -- setup/devices.sh@63 -- # found=1 00:22:41.722 08:19:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:41.722 08:19:14 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:41.722 08:19:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:41.980 08:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:41.980 08:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:42.265 08:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:42.265 08:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:42.265 08:19:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:42.265 08:19:15 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:22:42.265 08:19:15 -- setup/devices.sh@68 -- # return 0 00:22:42.265 08:19:15 -- setup/devices.sh@128 -- # cleanup_nvme 00:22:42.265 08:19:15 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:42.265 08:19:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:42.265 08:19:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:22:42.265 08:19:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:22:42.265 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:22:42.265 00:22:42.265 real 0m5.099s 00:22:42.265 user 0m1.175s 00:22:42.265 sys 0m1.649s 00:22:42.265 08:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.265 08:19:15 -- common/autotest_common.sh@10 -- # set +x 00:22:42.265 ************************************ 00:22:42.265 END TEST nvme_mount 00:22:42.265 ************************************ 00:22:42.265 08:19:15 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:22:42.265 08:19:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:42.265 08:19:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:42.265 08:19:15 -- common/autotest_common.sh@10 -- # set +x 00:22:42.265 ************************************ 00:22:42.265 START TEST dm_mount 00:22:42.265 ************************************ 00:22:42.265 08:19:15 -- common/autotest_common.sh@1104 -- # dm_mount 00:22:42.265 08:19:15 -- setup/devices.sh@144 -- # pv=nvme0n1 00:22:42.265 08:19:15 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:22:42.265 08:19:15 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:22:42.265 08:19:15 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:22:42.265 08:19:15 -- setup/common.sh@39 -- # local disk=nvme0n1 00:22:42.265 08:19:15 -- setup/common.sh@40 -- # local part_no=2 00:22:42.265 08:19:15 -- setup/common.sh@41 -- # local size=1073741824 00:22:42.265 08:19:15 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:22:42.265 08:19:15 -- setup/common.sh@44 -- # parts=() 00:22:42.265 08:19:15 -- setup/common.sh@44 -- # local parts 00:22:42.265 08:19:15 -- setup/common.sh@46 -- # (( part = 1 )) 00:22:42.265 08:19:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:42.265 08:19:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:22:42.265 08:19:15 -- setup/common.sh@46 -- # (( part++ )) 00:22:42.265 08:19:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:42.265 08:19:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:22:42.265 08:19:15 -- setup/common.sh@46 -- # (( part++ )) 00:22:42.265 08:19:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:42.265 08:19:15 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:22:42.265 08:19:15 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:22:42.265 08:19:15 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:22:43.643 Creating new GPT entries in memory. 00:22:43.643 GPT data structures destroyed! You may now partition the disk using fdisk or 00:22:43.643 other utilities. 00:22:43.643 08:19:16 -- setup/common.sh@57 -- # (( part = 1 )) 00:22:43.643 08:19:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:43.643 08:19:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:22:43.643 08:19:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:22:43.643 08:19:16 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:22:44.578 Creating new GPT entries in memory. 00:22:44.578 The operation has completed successfully. 00:22:44.578 08:19:17 -- setup/common.sh@57 -- # (( part++ )) 00:22:44.578 08:19:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:44.578 08:19:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:22:44.578 08:19:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:22:44.578 08:19:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:22:45.514 The operation has completed successfully. 00:22:45.514 08:19:18 -- setup/common.sh@57 -- # (( part++ )) 00:22:45.514 08:19:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:45.514 08:19:18 -- setup/common.sh@62 -- # wait 52799 00:22:45.514 08:19:18 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:22:45.514 08:19:18 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:22:45.514 08:19:18 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:22:45.514 08:19:18 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:22:45.514 08:19:18 -- setup/devices.sh@160 -- # for t in {1..5} 00:22:45.514 08:19:18 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:22:45.514 08:19:18 -- setup/devices.sh@161 -- # break 00:22:45.514 08:19:18 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:22:45.514 08:19:18 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:22:45.514 08:19:18 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:22:45.514 08:19:18 -- setup/devices.sh@166 -- # dm=dm-0 00:22:45.514 08:19:18 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:22:45.514 08:19:18 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:22:45.514 08:19:18 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:22:45.514 08:19:18 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:22:45.514 08:19:18 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:22:45.514 08:19:18 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:22:45.514 08:19:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:22:45.514 08:19:18 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:22:45.514 08:19:18 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:22:45.514 08:19:18 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:22:45.514 08:19:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:22:45.514 08:19:18 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:22:45.514 08:19:18 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:22:45.514 08:19:18 -- setup/devices.sh@53 -- # local found=0 00:22:45.514 08:19:18 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:22:45.514 08:19:18 -- setup/devices.sh@56 -- # : 00:22:45.514 08:19:18 -- setup/devices.sh@59 -- # local pci status 00:22:45.514 08:19:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:45.514 08:19:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:22:45.514 08:19:18 -- setup/devices.sh@47 -- # setup output config 00:22:45.514 08:19:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:45.514 08:19:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:45.773 08:19:18 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:45.773 08:19:18 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:22:45.773 08:19:18 -- setup/devices.sh@63 -- # found=1 00:22:45.773 08:19:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:45.773 08:19:18 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:45.773 08:19:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:46.066 08:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:46.066 08:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:46.325 08:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:46.325 08:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:46.325 08:19:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:46.325 08:19:19 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:22:46.325 08:19:19 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:22:46.325 08:19:19 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:22:46.325 08:19:19 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:22:46.325 08:19:19 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:22:46.325 08:19:19 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:22:46.325 08:19:19 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:22:46.325 08:19:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:22:46.325 08:19:19 -- setup/devices.sh@50 -- # local mount_point= 00:22:46.325 08:19:19 -- setup/devices.sh@51 -- # local test_file= 00:22:46.325 08:19:19 -- setup/devices.sh@53 -- # local found=0 00:22:46.325 08:19:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:22:46.325 08:19:19 -- setup/devices.sh@59 -- # local pci status 00:22:46.325 08:19:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:22:46.325 08:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:46.325 08:19:19 -- setup/devices.sh@47 -- # setup output config 00:22:46.325 08:19:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:46.325 08:19:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:46.584 08:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:46.584 08:19:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:22:46.584 08:19:19 -- setup/devices.sh@63 -- # found=1 00:22:46.584 08:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:46.584 08:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:46.584 08:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:47.152 08:19:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:47.152 08:19:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:47.152 08:19:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:47.152 08:19:20 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:47.152 08:19:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:47.152 08:19:20 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:22:47.152 08:19:20 -- setup/devices.sh@68 -- # return 0 00:22:47.152 08:19:20 -- setup/devices.sh@187 -- # cleanup_dm 00:22:47.152 08:19:20 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:22:47.152 08:19:20 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:22:47.152 08:19:20 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:22:47.152 08:19:20 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:47.152 08:19:20 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:22:47.152 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:22:47.152 08:19:20 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:22:47.152 08:19:20 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:22:47.410 00:22:47.410 real 0m4.953s 00:22:47.410 user 0m0.785s 00:22:47.410 sys 0m1.110s 00:22:47.410 08:19:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.410 08:19:20 -- common/autotest_common.sh@10 -- # set +x 00:22:47.410 ************************************ 00:22:47.410 END TEST dm_mount 00:22:47.410 ************************************ 00:22:47.410 08:19:20 -- setup/devices.sh@1 -- # cleanup 00:22:47.410 08:19:20 -- setup/devices.sh@11 -- # cleanup_nvme 00:22:47.410 08:19:20 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:47.410 08:19:20 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:47.410 08:19:20 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:22:47.410 08:19:20 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:22:47.410 08:19:20 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:22:47.670 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:22:47.670 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:22:47.670 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:22:47.670 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:22:47.670 08:19:20 -- setup/devices.sh@12 -- # cleanup_dm 00:22:47.670 08:19:20 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:22:47.670 08:19:20 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:22:47.670 08:19:20 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:47.670 08:19:20 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:22:47.670 08:19:20 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:22:47.670 08:19:20 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:22:47.670 00:22:47.670 real 0m11.794s 00:22:47.670 user 0m2.651s 00:22:47.670 sys 0m3.553s 00:22:47.670 08:19:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.670 08:19:20 -- common/autotest_common.sh@10 -- # set +x 00:22:47.670 ************************************ 00:22:47.670 END TEST devices 00:22:47.670 ************************************ 00:22:47.670 00:22:47.670 real 0m25.535s 00:22:47.670 user 0m8.092s 00:22:47.670 sys 0m12.213s 00:22:47.670 08:19:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.670 08:19:20 -- common/autotest_common.sh@10 -- # set +x 00:22:47.670 ************************************ 00:22:47.670 END TEST setup.sh 00:22:47.670 ************************************ 00:22:47.670 08:19:20 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:22:47.929 Hugepages 00:22:47.929 node hugesize free / total 00:22:47.929 node0 1048576kB 0 / 0 00:22:47.929 node0 2048kB 2048 / 2048 00:22:47.929 00:22:47.929 Type BDF Vendor Device NUMA Driver Device Block devices 00:22:47.929 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:22:48.188 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:22:48.188 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:22:48.188 08:19:21 -- spdk/autotest.sh@141 -- # uname -s 00:22:48.188 08:19:21 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:22:48.188 08:19:21 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:22:48.188 08:19:21 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:49.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:49.130 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:22:49.130 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:22:49.130 08:19:22 -- common/autotest_common.sh@1517 -- # sleep 1 00:22:50.516 08:19:23 -- common/autotest_common.sh@1518 -- # bdfs=() 00:22:50.516 08:19:23 -- common/autotest_common.sh@1518 -- # local bdfs 00:22:50.516 08:19:23 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:22:50.516 08:19:23 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:22:50.516 08:19:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:22:50.516 08:19:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:22:50.516 08:19:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:50.516 08:19:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:50.516 08:19:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:22:50.516 08:19:23 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:22:50.516 08:19:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:22:50.516 08:19:23 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:50.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:50.775 Waiting for block devices as requested 00:22:50.775 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:22:50.775 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:22:50.775 08:19:24 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:22:50.775 08:19:24 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:22:50.775 08:19:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:22:50.775 08:19:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:22:50.775 08:19:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:22:50.775 08:19:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:22:50.775 08:19:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:22:50.775 08:19:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:22:50.775 08:19:24 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:22:50.775 08:19:24 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:22:50.775 08:19:24 -- 
common/autotest_common.sh@1530 -- # grep oacs 00:22:50.775 08:19:24 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:22:50.775 08:19:24 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:22:50.775 08:19:24 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:22:50.775 08:19:24 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:22:50.775 08:19:24 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:22:50.775 08:19:24 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:22:50.775 08:19:24 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:22:50.775 08:19:24 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:22:50.775 08:19:24 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:22:50.775 08:19:24 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:22:50.775 08:19:24 -- common/autotest_common.sh@1542 -- # continue 00:22:50.775 08:19:24 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:22:50.775 08:19:24 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:22:50.775 08:19:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:22:50.775 08:19:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:22:50.775 08:19:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:22:50.775 08:19:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:22:50.775 08:19:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:22:50.775 08:19:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:22:50.775 08:19:24 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:22:50.775 08:19:24 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:22:50.775 08:19:24 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:22:50.775 08:19:24 -- common/autotest_common.sh@1530 -- # grep oacs 00:22:50.775 08:19:24 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:22:51.034 08:19:24 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:22:51.034 08:19:24 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:22:51.034 08:19:24 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:22:51.034 08:19:24 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:22:51.034 08:19:24 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:22:51.034 08:19:24 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:22:51.034 08:19:24 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:22:51.034 08:19:24 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:22:51.034 08:19:24 -- common/autotest_common.sh@1542 -- # continue 00:22:51.034 08:19:24 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:22:51.034 08:19:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:51.034 08:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:51.034 08:19:24 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:22:51.034 08:19:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:51.034 08:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:51.034 08:19:24 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:51.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:51.602 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:22:51.860 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:22:51.860 08:19:25 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:22:51.860 08:19:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:51.860 08:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:51.860 08:19:25 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:22:51.860 08:19:25 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:22:51.860 08:19:25 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:22:51.860 08:19:25 -- common/autotest_common.sh@1562 -- # bdfs=() 00:22:51.860 08:19:25 -- common/autotest_common.sh@1562 -- # local bdfs 00:22:51.860 08:19:25 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:22:51.860 08:19:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:22:51.860 08:19:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:22:51.860 08:19:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:51.860 08:19:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:22:51.860 08:19:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:51.860 08:19:25 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:22:51.860 08:19:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:22:51.860 08:19:25 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:22:51.860 08:19:25 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:22:51.860 08:19:25 -- common/autotest_common.sh@1565 -- # device=0x0010 00:22:51.861 08:19:25 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:51.861 08:19:25 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:22:51.861 08:19:25 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:22:51.861 08:19:25 -- common/autotest_common.sh@1565 -- # device=0x0010 00:22:51.861 08:19:25 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:51.861 08:19:25 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:22:51.861 08:19:25 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:22:51.861 08:19:25 -- common/autotest_common.sh@1578 -- # return 0 00:22:51.861 08:19:25 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:22:51.861 08:19:25 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:22:51.861 08:19:25 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:22:51.861 08:19:25 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:22:51.861 08:19:25 -- spdk/autotest.sh@173 -- # timing_enter lib 00:22:51.861 08:19:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:51.861 08:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:51.861 08:19:25 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:51.861 08:19:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:51.861 08:19:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:51.861 08:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:51.861 ************************************ 00:22:51.861 START TEST env 00:22:51.861 ************************************ 00:22:51.861 08:19:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:52.143 * Looking for test storage... 
00:22:52.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:22:52.143 08:19:25 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:52.143 08:19:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:52.143 08:19:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:52.143 08:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:52.143 ************************************ 00:22:52.143 START TEST env_memory 00:22:52.143 ************************************ 00:22:52.143 08:19:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:52.143 00:22:52.143 00:22:52.143 CUnit - A unit testing framework for C - Version 2.1-3 00:22:52.143 http://cunit.sourceforge.net/ 00:22:52.143 00:22:52.143 00:22:52.143 Suite: memory 00:22:52.143 Test: alloc and free memory map ...[2024-04-17 08:19:25.307227] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:22:52.143 passed 00:22:52.143 Test: mem map translation ...[2024-04-17 08:19:25.333153] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:22:52.143 [2024-04-17 08:19:25.333227] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:22:52.143 [2024-04-17 08:19:25.333280] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:22:52.143 [2024-04-17 08:19:25.333290] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:22:52.143 passed 00:22:52.143 Test: mem map registration ...[2024-04-17 08:19:25.384825] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:22:52.143 [2024-04-17 08:19:25.384890] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:22:52.143 passed 00:22:52.143 Test: mem map adjacent registrations ...passed 00:22:52.143 00:22:52.143 Run Summary: Type Total Ran Passed Failed Inactive 00:22:52.143 suites 1 1 n/a 0 0 00:22:52.143 tests 4 4 4 0 0 00:22:52.143 asserts 152 152 152 0 n/a 00:22:52.143 00:22:52.143 Elapsed time = 0.169 seconds 00:22:52.143 00:22:52.143 real 0m0.180s 00:22:52.143 user 0m0.167s 00:22:52.143 sys 0m0.012s 00:22:52.143 08:19:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:52.143 08:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:52.143 ************************************ 00:22:52.143 END TEST env_memory 00:22:52.143 ************************************ 00:22:52.406 08:19:25 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:52.406 08:19:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:52.406 08:19:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:52.406 08:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:52.406 ************************************ 00:22:52.406 START TEST env_vtophys 00:22:52.406 ************************************ 00:22:52.406 08:19:25 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:52.406 EAL: lib.eal log level changed from notice to debug 00:22:52.406 EAL: Detected lcore 0 as core 0 on socket 0 00:22:52.406 EAL: Detected lcore 1 as core 0 on socket 0 00:22:52.406 EAL: Detected lcore 2 as core 0 on socket 0 00:22:52.406 EAL: Detected lcore 3 as core 0 on socket 0 00:22:52.406 EAL: Detected lcore 4 as core 0 on socket 0 00:22:52.406 EAL: Detected lcore 5 as core 0 on socket 0 00:22:52.406 EAL: Detected lcore 6 as core 0 on socket 0 00:22:52.406 EAL: Detected lcore 7 as core 0 on socket 0 00:22:52.406 EAL: Detected lcore 8 as core 0 on socket 0 00:22:52.406 EAL: Detected lcore 9 as core 0 on socket 0 00:22:52.406 EAL: Maximum logical cores by configuration: 128 00:22:52.406 EAL: Detected CPU lcores: 10 00:22:52.406 EAL: Detected NUMA nodes: 1 00:22:52.406 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:22:52.406 EAL: Detected shared linkage of DPDK 00:22:52.406 EAL: No shared files mode enabled, IPC will be disabled 00:22:52.406 EAL: Selected IOVA mode 'PA' 00:22:52.406 EAL: Probing VFIO support... 00:22:52.406 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:22:52.406 EAL: VFIO modules not loaded, skipping VFIO support... 00:22:52.406 EAL: Ask a virtual area of 0x2e000 bytes 00:22:52.406 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:22:52.406 EAL: Setting up physically contiguous memory... 00:22:52.406 EAL: Setting maximum number of open files to 524288 00:22:52.406 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:22:52.406 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:22:52.406 EAL: Ask a virtual area of 0x61000 bytes 00:22:52.406 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:22:52.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:52.406 EAL: Ask a virtual area of 0x400000000 bytes 00:22:52.406 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:22:52.406 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:22:52.406 EAL: Ask a virtual area of 0x61000 bytes 00:22:52.406 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:22:52.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:52.406 EAL: Ask a virtual area of 0x400000000 bytes 00:22:52.406 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:22:52.406 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:22:52.406 EAL: Ask a virtual area of 0x61000 bytes 00:22:52.406 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:22:52.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:52.406 EAL: Ask a virtual area of 0x400000000 bytes 00:22:52.406 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:22:52.406 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:22:52.406 EAL: Ask a virtual area of 0x61000 bytes 00:22:52.406 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:22:52.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:52.406 EAL: Ask a virtual area of 0x400000000 bytes 00:22:52.406 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:22:52.406 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:22:52.406 EAL: Hugepages will be freed exactly as allocated. 
00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: TSC frequency is ~2290000 KHz 00:22:52.406 EAL: Main lcore 0 is ready (tid=7f0213e01a00;cpuset=[0]) 00:22:52.406 EAL: Trying to obtain current memory policy. 00:22:52.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.406 EAL: Restoring previous memory policy: 0 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was expanded by 2MB 00:22:52.406 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:22:52.406 EAL: No PCI address specified using 'addr=' in: bus=pci 00:22:52.406 EAL: Mem event callback 'spdk:(nil)' registered 00:22:52.406 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:22:52.406 00:22:52.406 00:22:52.406 CUnit - A unit testing framework for C - Version 2.1-3 00:22:52.406 http://cunit.sourceforge.net/ 00:22:52.406 00:22:52.406 00:22:52.406 Suite: components_suite 00:22:52.406 Test: vtophys_malloc_test ...passed 00:22:52.406 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:22:52.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.406 EAL: Restoring previous memory policy: 4 00:22:52.406 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was expanded by 4MB 00:22:52.406 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was shrunk by 4MB 00:22:52.406 EAL: Trying to obtain current memory policy. 00:22:52.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.406 EAL: Restoring previous memory policy: 4 00:22:52.406 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was expanded by 6MB 00:22:52.406 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was shrunk by 6MB 00:22:52.406 EAL: Trying to obtain current memory policy. 00:22:52.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.406 EAL: Restoring previous memory policy: 4 00:22:52.406 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was expanded by 10MB 00:22:52.406 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was shrunk by 10MB 00:22:52.406 EAL: Trying to obtain current memory policy. 
00:22:52.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.406 EAL: Restoring previous memory policy: 4 00:22:52.406 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was expanded by 18MB 00:22:52.406 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was shrunk by 18MB 00:22:52.406 EAL: Trying to obtain current memory policy. 00:22:52.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.406 EAL: Restoring previous memory policy: 4 00:22:52.406 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.406 EAL: request: mp_malloc_sync 00:22:52.406 EAL: No shared files mode enabled, IPC is disabled 00:22:52.406 EAL: Heap on socket 0 was expanded by 34MB 00:22:52.407 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.407 EAL: request: mp_malloc_sync 00:22:52.407 EAL: No shared files mode enabled, IPC is disabled 00:22:52.407 EAL: Heap on socket 0 was shrunk by 34MB 00:22:52.407 EAL: Trying to obtain current memory policy. 00:22:52.407 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.407 EAL: Restoring previous memory policy: 4 00:22:52.407 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.407 EAL: request: mp_malloc_sync 00:22:52.407 EAL: No shared files mode enabled, IPC is disabled 00:22:52.407 EAL: Heap on socket 0 was expanded by 66MB 00:22:52.407 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.407 EAL: request: mp_malloc_sync 00:22:52.407 EAL: No shared files mode enabled, IPC is disabled 00:22:52.407 EAL: Heap on socket 0 was shrunk by 66MB 00:22:52.407 EAL: Trying to obtain current memory policy. 00:22:52.407 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.666 EAL: Restoring previous memory policy: 4 00:22:52.666 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.666 EAL: request: mp_malloc_sync 00:22:52.666 EAL: No shared files mode enabled, IPC is disabled 00:22:52.666 EAL: Heap on socket 0 was expanded by 130MB 00:22:52.666 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.666 EAL: request: mp_malloc_sync 00:22:52.666 EAL: No shared files mode enabled, IPC is disabled 00:22:52.666 EAL: Heap on socket 0 was shrunk by 130MB 00:22:52.666 EAL: Trying to obtain current memory policy. 00:22:52.666 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.666 EAL: Restoring previous memory policy: 4 00:22:52.666 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.666 EAL: request: mp_malloc_sync 00:22:52.666 EAL: No shared files mode enabled, IPC is disabled 00:22:52.666 EAL: Heap on socket 0 was expanded by 258MB 00:22:52.666 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.666 EAL: request: mp_malloc_sync 00:22:52.666 EAL: No shared files mode enabled, IPC is disabled 00:22:52.666 EAL: Heap on socket 0 was shrunk by 258MB 00:22:52.666 EAL: Trying to obtain current memory policy. 
00:22:52.666 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:52.933 EAL: Restoring previous memory policy: 4 00:22:52.933 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.933 EAL: request: mp_malloc_sync 00:22:52.933 EAL: No shared files mode enabled, IPC is disabled 00:22:52.933 EAL: Heap on socket 0 was expanded by 514MB 00:22:52.933 EAL: Calling mem event callback 'spdk:(nil)' 00:22:52.933 EAL: request: mp_malloc_sync 00:22:52.933 EAL: No shared files mode enabled, IPC is disabled 00:22:52.933 EAL: Heap on socket 0 was shrunk by 514MB 00:22:52.933 EAL: Trying to obtain current memory policy. 00:22:52.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:53.211 EAL: Restoring previous memory policy: 4 00:22:53.211 EAL: Calling mem event callback 'spdk:(nil)' 00:22:53.211 EAL: request: mp_malloc_sync 00:22:53.211 EAL: No shared files mode enabled, IPC is disabled 00:22:53.211 EAL: Heap on socket 0 was expanded by 1026MB 00:22:53.211 EAL: Calling mem event callback 'spdk:(nil)' 00:22:53.470 passed 00:22:53.470 00:22:53.470 Run Summary: Type Total Ran Passed Failed Inactive 00:22:53.470 suites 1 1 n/a 0 0 00:22:53.470 tests 2 2 2 0 0 00:22:53.470 asserts 5316 5316 5316 0 n/a 00:22:53.470 00:22:53.470 Elapsed time = 1.015 seconds 00:22:53.470 EAL: request: mp_malloc_sync 00:22:53.470 EAL: No shared files mode enabled, IPC is disabled 00:22:53.470 EAL: Heap on socket 0 was shrunk by 1026MB 00:22:53.470 EAL: Calling mem event callback 'spdk:(nil)' 00:22:53.470 EAL: request: mp_malloc_sync 00:22:53.470 EAL: No shared files mode enabled, IPC is disabled 00:22:53.470 EAL: Heap on socket 0 was shrunk by 2MB 00:22:53.470 EAL: No shared files mode enabled, IPC is disabled 00:22:53.470 EAL: No shared files mode enabled, IPC is disabled 00:22:53.470 EAL: No shared files mode enabled, IPC is disabled 00:22:53.470 00:22:53.470 real 0m1.218s 00:22:53.470 user 0m0.665s 00:22:53.470 sys 0m0.422s 00:22:53.470 08:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.470 08:19:26 -- common/autotest_common.sh@10 -- # set +x 00:22:53.470 ************************************ 00:22:53.470 END TEST env_vtophys 00:22:53.470 ************************************ 00:22:53.470 08:19:26 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:22:53.470 08:19:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:53.470 08:19:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:53.470 08:19:26 -- common/autotest_common.sh@10 -- # set +x 00:22:53.470 ************************************ 00:22:53.470 START TEST env_pci 00:22:53.470 ************************************ 00:22:53.470 08:19:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:22:53.470 00:22:53.470 00:22:53.470 CUnit - A unit testing framework for C - Version 2.1-3 00:22:53.470 http://cunit.sourceforge.net/ 00:22:53.470 00:22:53.470 00:22:53.470 Suite: pci 00:22:53.470 Test: pci_hook ...[2024-04-17 08:19:26.774280] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53977 has claimed it 00:22:53.470 passed 00:22:53.470 00:22:53.470 Run Summary: Type Total Ran Passed Failed Inactive 00:22:53.470 suites 1 1 n/a 0 0 00:22:53.470 tests 1 1 1 0 0 00:22:53.470 asserts 25 25 25 0 n/a 00:22:53.470 00:22:53.470 Elapsed time = 0.004 seconds 00:22:53.470 EAL: Cannot find device (10000:00:01.0) 00:22:53.470 EAL: Failed to attach device 
on primary process 00:22:53.470 00:22:53.470 real 0m0.029s 00:22:53.470 user 0m0.012s 00:22:53.470 sys 0m0.017s 00:22:53.470 08:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.470 08:19:26 -- common/autotest_common.sh@10 -- # set +x 00:22:53.470 ************************************ 00:22:53.470 END TEST env_pci 00:22:53.470 ************************************ 00:22:53.729 08:19:26 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:22:53.729 08:19:26 -- env/env.sh@15 -- # uname 00:22:53.729 08:19:26 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:22:53.729 08:19:26 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:22:53.729 08:19:26 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:22:53.729 08:19:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:53.729 08:19:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:53.729 08:19:26 -- common/autotest_common.sh@10 -- # set +x 00:22:53.729 ************************************ 00:22:53.729 START TEST env_dpdk_post_init 00:22:53.729 ************************************ 00:22:53.729 08:19:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:22:53.729 EAL: Detected CPU lcores: 10 00:22:53.729 EAL: Detected NUMA nodes: 1 00:22:53.729 EAL: Detected shared linkage of DPDK 00:22:53.729 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:22:53.729 EAL: Selected IOVA mode 'PA' 00:22:53.729 TELEMETRY: No legacy callbacks, legacy socket not created 00:22:53.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:22:53.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:22:53.729 Starting DPDK initialization... 00:22:53.729 Starting SPDK post initialization... 00:22:53.729 SPDK NVMe probe 00:22:53.729 Attaching to 0000:00:06.0 00:22:53.729 Attaching to 0000:00:07.0 00:22:53.729 Attached to 0000:00:06.0 00:22:53.729 Attached to 0000:00:07.0 00:22:53.729 Cleaning up... 
00:22:53.729 00:22:53.729 real 0m0.185s 00:22:53.729 user 0m0.052s 00:22:53.729 sys 0m0.034s 00:22:53.729 08:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.729 08:19:27 -- common/autotest_common.sh@10 -- # set +x 00:22:53.729 ************************************ 00:22:53.729 END TEST env_dpdk_post_init 00:22:53.729 ************************************ 00:22:53.988 08:19:27 -- env/env.sh@26 -- # uname 00:22:53.988 08:19:27 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:22:53.988 08:19:27 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:22:53.988 08:19:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:53.988 08:19:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:53.988 08:19:27 -- common/autotest_common.sh@10 -- # set +x 00:22:53.988 ************************************ 00:22:53.988 START TEST env_mem_callbacks 00:22:53.988 ************************************ 00:22:53.988 08:19:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:22:53.988 EAL: Detected CPU lcores: 10 00:22:53.988 EAL: Detected NUMA nodes: 1 00:22:53.988 EAL: Detected shared linkage of DPDK 00:22:53.988 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:22:53.988 EAL: Selected IOVA mode 'PA' 00:22:53.988 00:22:53.988 00:22:53.988 CUnit - A unit testing framework for C - Version 2.1-3 00:22:53.988 http://cunit.sourceforge.net/ 00:22:53.988 00:22:53.988 TELEMETRY: No legacy callbacks, legacy socket not created 00:22:53.988 00:22:53.988 Suite: memory 00:22:53.988 Test: test ... 00:22:53.988 register 0x200000200000 2097152 00:22:53.988 malloc 3145728 00:22:53.988 register 0x200000400000 4194304 00:22:53.988 buf 0x200000500000 len 3145728 PASSED 00:22:53.988 malloc 64 00:22:53.988 buf 0x2000004fff40 len 64 PASSED 00:22:53.988 malloc 4194304 00:22:53.988 register 0x200000800000 6291456 00:22:53.988 buf 0x200000a00000 len 4194304 PASSED 00:22:53.988 free 0x200000500000 3145728 00:22:53.988 free 0x2000004fff40 64 00:22:53.988 unregister 0x200000400000 4194304 PASSED 00:22:53.988 free 0x200000a00000 4194304 00:22:53.988 unregister 0x200000800000 6291456 PASSED 00:22:53.988 malloc 8388608 00:22:53.988 register 0x200000400000 10485760 00:22:53.988 buf 0x200000600000 len 8388608 PASSED 00:22:53.988 free 0x200000600000 8388608 00:22:53.988 unregister 0x200000400000 10485760 PASSED 00:22:53.988 passed 00:22:53.988 00:22:53.988 Run Summary: Type Total Ran Passed Failed Inactive 00:22:53.988 suites 1 1 n/a 0 0 00:22:53.988 tests 1 1 1 0 0 00:22:53.988 asserts 15 15 15 0 n/a 00:22:53.988 00:22:53.988 Elapsed time = 0.011 seconds 00:22:53.988 00:22:53.988 real 0m0.148s 00:22:53.988 user 0m0.019s 00:22:53.988 sys 0m0.027s 00:22:53.988 08:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.988 08:19:27 -- common/autotest_common.sh@10 -- # set +x 00:22:53.988 ************************************ 00:22:53.988 END TEST env_mem_callbacks 00:22:53.988 ************************************ 00:22:53.988 00:22:53.988 real 0m2.125s 00:22:53.988 user 0m1.039s 00:22:53.988 sys 0m0.765s 00:22:53.988 08:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.988 08:19:27 -- common/autotest_common.sh@10 -- # set +x 00:22:53.988 ************************************ 00:22:53.988 END TEST env 00:22:53.988 ************************************ 00:22:54.248 08:19:27 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:22:54.248 08:19:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:54.248 08:19:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:54.248 08:19:27 -- common/autotest_common.sh@10 -- # set +x 00:22:54.248 ************************************ 00:22:54.248 START TEST rpc 00:22:54.248 ************************************ 00:22:54.248 08:19:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:22:54.248 * Looking for test storage... 00:22:54.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:22:54.248 08:19:27 -- rpc/rpc.sh@65 -- # spdk_pid=54080 00:22:54.248 08:19:27 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:22:54.248 08:19:27 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:54.248 08:19:27 -- rpc/rpc.sh@67 -- # waitforlisten 54080 00:22:54.248 08:19:27 -- common/autotest_common.sh@819 -- # '[' -z 54080 ']' 00:22:54.248 08:19:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.248 08:19:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:54.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.248 08:19:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.248 08:19:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:54.248 08:19:27 -- common/autotest_common.sh@10 -- # set +x 00:22:54.248 [2024-04-17 08:19:27.502564] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:54.248 [2024-04-17 08:19:27.502647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54080 ] 00:22:54.506 [2024-04-17 08:19:27.641569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.506 [2024-04-17 08:19:27.746827] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:54.506 [2024-04-17 08:19:27.746997] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:22:54.506 [2024-04-17 08:19:27.747008] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 54080' to capture a snapshot of events at runtime. 00:22:54.506 [2024-04-17 08:19:27.747015] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid54080 for offline analysis/debug. 
00:22:54.506 [2024-04-17 08:19:27.747038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.072 08:19:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:55.072 08:19:28 -- common/autotest_common.sh@852 -- # return 0 00:22:55.072 08:19:28 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:22:55.072 08:19:28 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:22:55.072 08:19:28 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:22:55.072 08:19:28 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:22:55.072 08:19:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:55.072 08:19:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:55.072 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.072 ************************************ 00:22:55.072 START TEST rpc_integrity 00:22:55.072 ************************************ 00:22:55.072 08:19:28 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:22:55.072 08:19:28 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:55.072 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.072 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.331 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.331 08:19:28 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:22:55.331 08:19:28 -- rpc/rpc.sh@13 -- # jq length 00:22:55.331 08:19:28 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:22:55.331 08:19:28 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:22:55.331 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.331 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.331 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.331 08:19:28 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:22:55.331 08:19:28 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:22:55.331 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.331 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.331 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.331 08:19:28 -- rpc/rpc.sh@16 -- # bdevs='[ 00:22:55.331 { 00:22:55.331 "name": "Malloc0", 00:22:55.331 "aliases": [ 00:22:55.331 "13d44978-1a2b-479f-82b4-aa24ab7761eb" 00:22:55.331 ], 00:22:55.331 "product_name": "Malloc disk", 00:22:55.331 "block_size": 512, 00:22:55.331 "num_blocks": 16384, 00:22:55.331 "uuid": "13d44978-1a2b-479f-82b4-aa24ab7761eb", 00:22:55.331 "assigned_rate_limits": { 00:22:55.331 "rw_ios_per_sec": 0, 00:22:55.331 "rw_mbytes_per_sec": 0, 00:22:55.331 "r_mbytes_per_sec": 0, 00:22:55.331 "w_mbytes_per_sec": 0 00:22:55.331 }, 00:22:55.331 "claimed": false, 00:22:55.331 "zoned": false, 00:22:55.331 "supported_io_types": { 00:22:55.331 "read": true, 00:22:55.331 "write": true, 00:22:55.331 "unmap": true, 00:22:55.331 "write_zeroes": true, 00:22:55.331 "flush": true, 00:22:55.331 "reset": true, 00:22:55.331 "compare": false, 00:22:55.331 "compare_and_write": false, 00:22:55.331 "abort": true, 00:22:55.331 "nvme_admin": false, 00:22:55.331 "nvme_io": false 00:22:55.331 }, 00:22:55.331 "memory_domains": [ 00:22:55.331 { 00:22:55.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.331 
"dma_device_type": 2 00:22:55.331 } 00:22:55.331 ], 00:22:55.331 "driver_specific": {} 00:22:55.331 } 00:22:55.331 ]' 00:22:55.331 08:19:28 -- rpc/rpc.sh@17 -- # jq length 00:22:55.331 08:19:28 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:22:55.331 08:19:28 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:22:55.331 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.331 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.331 [2024-04-17 08:19:28.538499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:22:55.331 [2024-04-17 08:19:28.538556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.331 [2024-04-17 08:19:28.538573] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdeb3f0 00:22:55.331 [2024-04-17 08:19:28.538580] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.331 [2024-04-17 08:19:28.540133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.331 [2024-04-17 08:19:28.540171] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:22:55.331 Passthru0 00:22:55.331 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.331 08:19:28 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:22:55.331 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.331 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.331 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.331 08:19:28 -- rpc/rpc.sh@20 -- # bdevs='[ 00:22:55.331 { 00:22:55.331 "name": "Malloc0", 00:22:55.331 "aliases": [ 00:22:55.331 "13d44978-1a2b-479f-82b4-aa24ab7761eb" 00:22:55.331 ], 00:22:55.331 "product_name": "Malloc disk", 00:22:55.331 "block_size": 512, 00:22:55.331 "num_blocks": 16384, 00:22:55.331 "uuid": "13d44978-1a2b-479f-82b4-aa24ab7761eb", 00:22:55.331 "assigned_rate_limits": { 00:22:55.331 "rw_ios_per_sec": 0, 00:22:55.331 "rw_mbytes_per_sec": 0, 00:22:55.331 "r_mbytes_per_sec": 0, 00:22:55.331 "w_mbytes_per_sec": 0 00:22:55.331 }, 00:22:55.331 "claimed": true, 00:22:55.331 "claim_type": "exclusive_write", 00:22:55.331 "zoned": false, 00:22:55.331 "supported_io_types": { 00:22:55.331 "read": true, 00:22:55.331 "write": true, 00:22:55.331 "unmap": true, 00:22:55.331 "write_zeroes": true, 00:22:55.331 "flush": true, 00:22:55.331 "reset": true, 00:22:55.331 "compare": false, 00:22:55.331 "compare_and_write": false, 00:22:55.331 "abort": true, 00:22:55.331 "nvme_admin": false, 00:22:55.331 "nvme_io": false 00:22:55.331 }, 00:22:55.331 "memory_domains": [ 00:22:55.331 { 00:22:55.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.331 "dma_device_type": 2 00:22:55.331 } 00:22:55.331 ], 00:22:55.331 "driver_specific": {} 00:22:55.331 }, 00:22:55.331 { 00:22:55.331 "name": "Passthru0", 00:22:55.331 "aliases": [ 00:22:55.331 "67976b0f-bc07-5a3f-a32d-ed53e748947b" 00:22:55.331 ], 00:22:55.331 "product_name": "passthru", 00:22:55.331 "block_size": 512, 00:22:55.331 "num_blocks": 16384, 00:22:55.331 "uuid": "67976b0f-bc07-5a3f-a32d-ed53e748947b", 00:22:55.331 "assigned_rate_limits": { 00:22:55.331 "rw_ios_per_sec": 0, 00:22:55.331 "rw_mbytes_per_sec": 0, 00:22:55.331 "r_mbytes_per_sec": 0, 00:22:55.331 "w_mbytes_per_sec": 0 00:22:55.331 }, 00:22:55.331 "claimed": false, 00:22:55.331 "zoned": false, 00:22:55.331 "supported_io_types": { 00:22:55.331 "read": true, 00:22:55.331 "write": true, 00:22:55.331 "unmap": true, 00:22:55.331 
"write_zeroes": true, 00:22:55.331 "flush": true, 00:22:55.331 "reset": true, 00:22:55.331 "compare": false, 00:22:55.331 "compare_and_write": false, 00:22:55.331 "abort": true, 00:22:55.331 "nvme_admin": false, 00:22:55.331 "nvme_io": false 00:22:55.331 }, 00:22:55.331 "memory_domains": [ 00:22:55.331 { 00:22:55.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.331 "dma_device_type": 2 00:22:55.331 } 00:22:55.331 ], 00:22:55.331 "driver_specific": { 00:22:55.331 "passthru": { 00:22:55.331 "name": "Passthru0", 00:22:55.331 "base_bdev_name": "Malloc0" 00:22:55.331 } 00:22:55.331 } 00:22:55.331 } 00:22:55.331 ]' 00:22:55.331 08:19:28 -- rpc/rpc.sh@21 -- # jq length 00:22:55.331 08:19:28 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:22:55.331 08:19:28 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:22:55.331 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.331 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.331 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.331 08:19:28 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:55.331 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.331 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.331 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.331 08:19:28 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:55.331 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.331 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.331 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.332 08:19:28 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:22:55.332 08:19:28 -- rpc/rpc.sh@26 -- # jq length 00:22:55.590 08:19:28 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:22:55.590 00:22:55.590 real 0m0.300s 00:22:55.590 user 0m0.190s 00:22:55.590 sys 0m0.043s 00:22:55.590 08:19:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.590 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.590 ************************************ 00:22:55.590 END TEST rpc_integrity 00:22:55.590 ************************************ 00:22:55.590 08:19:28 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:22:55.590 08:19:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:55.590 08:19:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:55.590 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.590 ************************************ 00:22:55.590 START TEST rpc_plugins 00:22:55.590 ************************************ 00:22:55.590 08:19:28 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:22:55.590 08:19:28 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:22:55.590 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.590 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.590 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.590 08:19:28 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:22:55.590 08:19:28 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:22:55.590 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.590 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.590 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.590 08:19:28 -- rpc/rpc.sh@31 -- # bdevs='[ 00:22:55.590 { 00:22:55.590 "name": "Malloc1", 00:22:55.590 "aliases": [ 00:22:55.590 "4ea125d9-da52-4c3d-83e5-e1f40b72d020" 00:22:55.590 ], 00:22:55.590 "product_name": "Malloc disk", 00:22:55.590 
"block_size": 4096, 00:22:55.590 "num_blocks": 256, 00:22:55.590 "uuid": "4ea125d9-da52-4c3d-83e5-e1f40b72d020", 00:22:55.590 "assigned_rate_limits": { 00:22:55.590 "rw_ios_per_sec": 0, 00:22:55.590 "rw_mbytes_per_sec": 0, 00:22:55.590 "r_mbytes_per_sec": 0, 00:22:55.590 "w_mbytes_per_sec": 0 00:22:55.590 }, 00:22:55.590 "claimed": false, 00:22:55.590 "zoned": false, 00:22:55.590 "supported_io_types": { 00:22:55.590 "read": true, 00:22:55.590 "write": true, 00:22:55.590 "unmap": true, 00:22:55.590 "write_zeroes": true, 00:22:55.590 "flush": true, 00:22:55.590 "reset": true, 00:22:55.590 "compare": false, 00:22:55.590 "compare_and_write": false, 00:22:55.590 "abort": true, 00:22:55.590 "nvme_admin": false, 00:22:55.590 "nvme_io": false 00:22:55.590 }, 00:22:55.590 "memory_domains": [ 00:22:55.590 { 00:22:55.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.590 "dma_device_type": 2 00:22:55.590 } 00:22:55.590 ], 00:22:55.590 "driver_specific": {} 00:22:55.590 } 00:22:55.590 ]' 00:22:55.590 08:19:28 -- rpc/rpc.sh@32 -- # jq length 00:22:55.590 08:19:28 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:22:55.590 08:19:28 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:22:55.590 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.590 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.590 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.590 08:19:28 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:22:55.590 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.590 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.590 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.590 08:19:28 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:22:55.590 08:19:28 -- rpc/rpc.sh@36 -- # jq length 00:22:55.590 08:19:28 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:22:55.590 00:22:55.590 real 0m0.143s 00:22:55.590 user 0m0.086s 00:22:55.590 sys 0m0.024s 00:22:55.590 08:19:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.590 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.590 ************************************ 00:22:55.590 END TEST rpc_plugins 00:22:55.590 ************************************ 00:22:55.849 08:19:28 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:22:55.849 08:19:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:55.849 08:19:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:55.849 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.849 ************************************ 00:22:55.849 START TEST rpc_trace_cmd_test 00:22:55.849 ************************************ 00:22:55.849 08:19:28 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:22:55.849 08:19:28 -- rpc/rpc.sh@40 -- # local info 00:22:55.849 08:19:28 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:22:55.849 08:19:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.849 08:19:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.849 08:19:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.849 08:19:28 -- rpc/rpc.sh@42 -- # info='{ 00:22:55.849 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid54080", 00:22:55.849 "tpoint_group_mask": "0x8", 00:22:55.849 "iscsi_conn": { 00:22:55.849 "mask": "0x2", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "scsi": { 00:22:55.849 "mask": "0x4", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "bdev": { 00:22:55.849 "mask": "0x8", 00:22:55.849 "tpoint_mask": 
"0xffffffffffffffff" 00:22:55.849 }, 00:22:55.849 "nvmf_rdma": { 00:22:55.849 "mask": "0x10", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "nvmf_tcp": { 00:22:55.849 "mask": "0x20", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "ftl": { 00:22:55.849 "mask": "0x40", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "blobfs": { 00:22:55.849 "mask": "0x80", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "dsa": { 00:22:55.849 "mask": "0x200", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "thread": { 00:22:55.849 "mask": "0x400", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "nvme_pcie": { 00:22:55.849 "mask": "0x800", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "iaa": { 00:22:55.849 "mask": "0x1000", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "nvme_tcp": { 00:22:55.849 "mask": "0x2000", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 }, 00:22:55.849 "bdev_nvme": { 00:22:55.849 "mask": "0x4000", 00:22:55.849 "tpoint_mask": "0x0" 00:22:55.849 } 00:22:55.849 }' 00:22:55.849 08:19:28 -- rpc/rpc.sh@43 -- # jq length 00:22:55.849 08:19:29 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:22:55.850 08:19:29 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:22:55.850 08:19:29 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:22:55.850 08:19:29 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:22:55.850 08:19:29 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:22:55.850 08:19:29 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:22:55.850 08:19:29 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:22:55.850 08:19:29 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:22:55.850 ************************************ 00:22:55.850 END TEST rpc_trace_cmd_test 00:22:55.850 ************************************ 00:22:55.850 08:19:29 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:22:55.850 00:22:55.850 real 0m0.228s 00:22:55.850 user 0m0.189s 00:22:55.850 sys 0m0.031s 00:22:55.850 08:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.850 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.108 08:19:29 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:22:56.108 08:19:29 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:22:56.108 08:19:29 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:22:56.108 08:19:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:56.108 08:19:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:56.108 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.108 ************************************ 00:22:56.108 START TEST rpc_daemon_integrity 00:22:56.108 ************************************ 00:22:56.108 08:19:29 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:22:56.108 08:19:29 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:56.108 08:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.108 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.108 08:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.108 08:19:29 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:22:56.108 08:19:29 -- rpc/rpc.sh@13 -- # jq length 00:22:56.108 08:19:29 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:22:56.108 08:19:29 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:22:56.108 08:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.108 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.108 08:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.108 08:19:29 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:22:56.108 08:19:29 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:22:56.108 08:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.109 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.109 08:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.109 08:19:29 -- rpc/rpc.sh@16 -- # bdevs='[ 00:22:56.109 { 00:22:56.109 "name": "Malloc2", 00:22:56.109 "aliases": [ 00:22:56.109 "9ed89273-bdc9-467d-80a7-7be01ca9b642" 00:22:56.109 ], 00:22:56.109 "product_name": "Malloc disk", 00:22:56.109 "block_size": 512, 00:22:56.109 "num_blocks": 16384, 00:22:56.109 "uuid": "9ed89273-bdc9-467d-80a7-7be01ca9b642", 00:22:56.109 "assigned_rate_limits": { 00:22:56.109 "rw_ios_per_sec": 0, 00:22:56.109 "rw_mbytes_per_sec": 0, 00:22:56.109 "r_mbytes_per_sec": 0, 00:22:56.109 "w_mbytes_per_sec": 0 00:22:56.109 }, 00:22:56.109 "claimed": false, 00:22:56.109 "zoned": false, 00:22:56.109 "supported_io_types": { 00:22:56.109 "read": true, 00:22:56.109 "write": true, 00:22:56.109 "unmap": true, 00:22:56.109 "write_zeroes": true, 00:22:56.109 "flush": true, 00:22:56.109 "reset": true, 00:22:56.109 "compare": false, 00:22:56.109 "compare_and_write": false, 00:22:56.109 "abort": true, 00:22:56.109 "nvme_admin": false, 00:22:56.109 "nvme_io": false 00:22:56.109 }, 00:22:56.109 "memory_domains": [ 00:22:56.109 { 00:22:56.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.109 "dma_device_type": 2 00:22:56.109 } 00:22:56.109 ], 00:22:56.109 "driver_specific": {} 00:22:56.109 } 00:22:56.109 ]' 00:22:56.109 08:19:29 -- rpc/rpc.sh@17 -- # jq length 00:22:56.109 08:19:29 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:22:56.109 08:19:29 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:22:56.109 08:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.109 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.109 [2024-04-17 08:19:29.373187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:22:56.109 [2024-04-17 08:19:29.373246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.109 [2024-04-17 08:19:29.373263] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdeb030 00:22:56.109 [2024-04-17 08:19:29.373270] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.109 [2024-04-17 08:19:29.374725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.109 [2024-04-17 08:19:29.374759] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:22:56.109 Passthru0 00:22:56.109 08:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.109 08:19:29 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:22:56.109 08:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.109 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.109 08:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.109 08:19:29 -- rpc/rpc.sh@20 -- # bdevs='[ 00:22:56.109 { 00:22:56.109 "name": "Malloc2", 00:22:56.109 "aliases": [ 00:22:56.109 "9ed89273-bdc9-467d-80a7-7be01ca9b642" 00:22:56.109 ], 00:22:56.109 "product_name": "Malloc disk", 00:22:56.109 "block_size": 512, 00:22:56.109 "num_blocks": 16384, 00:22:56.109 "uuid": "9ed89273-bdc9-467d-80a7-7be01ca9b642", 00:22:56.109 "assigned_rate_limits": { 00:22:56.109 "rw_ios_per_sec": 0, 00:22:56.109 "rw_mbytes_per_sec": 0, 00:22:56.109 "r_mbytes_per_sec": 0, 00:22:56.109 
"w_mbytes_per_sec": 0 00:22:56.109 }, 00:22:56.109 "claimed": true, 00:22:56.109 "claim_type": "exclusive_write", 00:22:56.109 "zoned": false, 00:22:56.109 "supported_io_types": { 00:22:56.109 "read": true, 00:22:56.109 "write": true, 00:22:56.109 "unmap": true, 00:22:56.109 "write_zeroes": true, 00:22:56.109 "flush": true, 00:22:56.109 "reset": true, 00:22:56.109 "compare": false, 00:22:56.109 "compare_and_write": false, 00:22:56.109 "abort": true, 00:22:56.109 "nvme_admin": false, 00:22:56.109 "nvme_io": false 00:22:56.109 }, 00:22:56.109 "memory_domains": [ 00:22:56.109 { 00:22:56.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.109 "dma_device_type": 2 00:22:56.109 } 00:22:56.109 ], 00:22:56.109 "driver_specific": {} 00:22:56.109 }, 00:22:56.109 { 00:22:56.109 "name": "Passthru0", 00:22:56.109 "aliases": [ 00:22:56.109 "7bf3a7c9-7786-5816-8e03-aea803f993b2" 00:22:56.109 ], 00:22:56.109 "product_name": "passthru", 00:22:56.109 "block_size": 512, 00:22:56.109 "num_blocks": 16384, 00:22:56.109 "uuid": "7bf3a7c9-7786-5816-8e03-aea803f993b2", 00:22:56.109 "assigned_rate_limits": { 00:22:56.109 "rw_ios_per_sec": 0, 00:22:56.109 "rw_mbytes_per_sec": 0, 00:22:56.109 "r_mbytes_per_sec": 0, 00:22:56.109 "w_mbytes_per_sec": 0 00:22:56.109 }, 00:22:56.109 "claimed": false, 00:22:56.109 "zoned": false, 00:22:56.109 "supported_io_types": { 00:22:56.109 "read": true, 00:22:56.109 "write": true, 00:22:56.109 "unmap": true, 00:22:56.109 "write_zeroes": true, 00:22:56.109 "flush": true, 00:22:56.109 "reset": true, 00:22:56.109 "compare": false, 00:22:56.109 "compare_and_write": false, 00:22:56.109 "abort": true, 00:22:56.109 "nvme_admin": false, 00:22:56.109 "nvme_io": false 00:22:56.109 }, 00:22:56.109 "memory_domains": [ 00:22:56.109 { 00:22:56.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.109 "dma_device_type": 2 00:22:56.109 } 00:22:56.109 ], 00:22:56.109 "driver_specific": { 00:22:56.109 "passthru": { 00:22:56.109 "name": "Passthru0", 00:22:56.109 "base_bdev_name": "Malloc2" 00:22:56.109 } 00:22:56.109 } 00:22:56.109 } 00:22:56.109 ]' 00:22:56.109 08:19:29 -- rpc/rpc.sh@21 -- # jq length 00:22:56.369 08:19:29 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:22:56.369 08:19:29 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:22:56.369 08:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.369 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.369 08:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.369 08:19:29 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:56.369 08:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.369 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.369 08:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.369 08:19:29 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:56.369 08:19:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.369 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.369 08:19:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.369 08:19:29 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:22:56.369 08:19:29 -- rpc/rpc.sh@26 -- # jq length 00:22:56.369 08:19:29 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:22:56.369 00:22:56.369 real 0m0.302s 00:22:56.369 user 0m0.181s 00:22:56.369 sys 0m0.052s 00:22:56.369 08:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.369 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.369 ************************************ 00:22:56.369 END TEST 
rpc_daemon_integrity 00:22:56.369 ************************************ 00:22:56.369 08:19:29 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:56.369 08:19:29 -- rpc/rpc.sh@84 -- # killprocess 54080 00:22:56.369 08:19:29 -- common/autotest_common.sh@926 -- # '[' -z 54080 ']' 00:22:56.369 08:19:29 -- common/autotest_common.sh@930 -- # kill -0 54080 00:22:56.369 08:19:29 -- common/autotest_common.sh@931 -- # uname 00:22:56.369 08:19:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:56.369 08:19:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54080 00:22:56.369 08:19:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:56.369 08:19:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:56.369 killing process with pid 54080 00:22:56.369 08:19:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54080' 00:22:56.369 08:19:29 -- common/autotest_common.sh@945 -- # kill 54080 00:22:56.369 08:19:29 -- common/autotest_common.sh@950 -- # wait 54080 00:22:56.938 00:22:56.938 real 0m2.637s 00:22:56.938 user 0m3.340s 00:22:56.938 sys 0m0.668s 00:22:56.938 08:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.938 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 ************************************ 00:22:56.938 END TEST rpc 00:22:56.938 ************************************ 00:22:56.938 08:19:30 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:22:56.938 08:19:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:56.938 08:19:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:56.938 08:19:30 -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 ************************************ 00:22:56.938 START TEST rpc_client 00:22:56.938 ************************************ 00:22:56.938 08:19:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:22:56.938 * Looking for test storage... 
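The rpc_daemon_integrity test that just finished stacks a passthru vbdev on top of a malloc bdev entirely through the RPC socket and then tears it down again. A minimal manual equivalent, assuming a spdk_tgt already listening on /var/tmp/spdk.sock (the bdev names are simply the ones this run happened to get), is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  "$rpc" -s "$sock" bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512-byte blocks (created as Malloc2 here)
  "$rpc" -s "$sock" bdev_passthru_create -b Malloc2 -p Passthru0  # claim Malloc2, expose it as Passthru0
  "$rpc" -s "$sock" bdev_get_bdevs | jq length                    # expect 2: the claimed base plus the passthru
  "$rpc" -s "$sock" bdev_passthru_delete Passthru0
  "$rpc" -s "$sock" bdev_malloc_delete Malloc2
  "$rpc" -s "$sock" bdev_get_bdevs | jq length                    # back to 0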
00:22:56.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:22:56.938 08:19:30 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:22:56.938 OK 00:22:56.938 08:19:30 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:22:56.938 00:22:56.938 real 0m0.144s 00:22:56.938 user 0m0.053s 00:22:56.938 sys 0m0.099s 00:22:56.938 08:19:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.938 08:19:30 -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 ************************************ 00:22:56.938 END TEST rpc_client 00:22:56.938 ************************************ 00:22:56.938 08:19:30 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:22:56.938 08:19:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:56.938 08:19:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:56.938 08:19:30 -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 ************************************ 00:22:56.938 START TEST json_config 00:22:56.938 ************************************ 00:22:56.938 08:19:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:22:57.223 08:19:30 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:57.223 08:19:30 -- nvmf/common.sh@7 -- # uname -s 00:22:57.224 08:19:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.224 08:19:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.224 08:19:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.224 08:19:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.224 08:19:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.224 08:19:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.224 08:19:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.224 08:19:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.224 08:19:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.224 08:19:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.224 08:19:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:22:57.224 08:19:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:22:57.224 08:19:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.224 08:19:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.224 08:19:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:57.224 08:19:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:57.224 08:19:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.224 08:19:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.224 08:19:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.224 08:19:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.224 08:19:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.224 08:19:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.224 08:19:30 -- paths/export.sh@5 -- # export PATH 00:22:57.224 08:19:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.224 08:19:30 -- nvmf/common.sh@46 -- # : 0 00:22:57.224 08:19:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:57.224 08:19:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:57.224 08:19:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:57.224 08:19:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.224 08:19:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.224 08:19:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:57.224 08:19:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:57.224 08:19:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:57.224 08:19:30 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:22:57.224 08:19:30 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:22:57.224 08:19:30 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:22:57.224 08:19:30 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:22:57.224 08:19:30 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:22:57.224 08:19:30 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:22:57.224 08:19:30 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:22:57.224 08:19:30 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:22:57.224 08:19:30 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:22:57.224 08:19:30 -- json_config/json_config.sh@32 -- # declare -A app_params 00:22:57.224 08:19:30 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:22:57.224 08:19:30 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:22:57.224 08:19:30 -- json_config/json_config.sh@43 -- # last_event_id=0 00:22:57.224 08:19:30 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:22:57.224 INFO: JSON configuration test init 
00:22:57.224 08:19:30 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:22:57.224 08:19:30 -- json_config/json_config.sh@420 -- # json_config_test_init 00:22:57.224 08:19:30 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:22:57.224 08:19:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:57.224 08:19:30 -- common/autotest_common.sh@10 -- # set +x 00:22:57.224 08:19:30 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:22:57.224 08:19:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:57.224 08:19:30 -- common/autotest_common.sh@10 -- # set +x 00:22:57.224 08:19:30 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:22:57.224 08:19:30 -- json_config/json_config.sh@98 -- # local app=target 00:22:57.224 08:19:30 -- json_config/json_config.sh@99 -- # shift 00:22:57.224 08:19:30 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:22:57.224 08:19:30 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:22:57.224 08:19:30 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:22:57.224 08:19:30 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:22:57.224 08:19:30 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:22:57.224 08:19:30 -- json_config/json_config.sh@111 -- # app_pid[$app]=54317 00:22:57.224 Waiting for target to run... 00:22:57.224 08:19:30 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:22:57.224 08:19:30 -- json_config/json_config.sh@114 -- # waitforlisten 54317 /var/tmp/spdk_tgt.sock 00:22:57.224 08:19:30 -- common/autotest_common.sh@819 -- # '[' -z 54317 ']' 00:22:57.224 08:19:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:22:57.224 08:19:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:57.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:22:57.224 08:19:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:22:57.224 08:19:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:57.224 08:19:30 -- common/autotest_common.sh@10 -- # set +x 00:22:57.224 08:19:30 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:22:57.224 [2024-04-17 08:19:30.379409] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:57.224 [2024-04-17 08:19:30.379496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54317 ] 00:22:57.494 [2024-04-17 08:19:30.733610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.494 [2024-04-17 08:19:30.823438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:57.494 [2024-04-17 08:19:30.823636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.062 08:19:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:58.062 08:19:31 -- common/autotest_common.sh@852 -- # return 0 00:22:58.062 00:22:58.062 08:19:31 -- json_config/json_config.sh@115 -- # echo '' 00:22:58.062 08:19:31 -- json_config/json_config.sh@322 -- # create_accel_config 00:22:58.062 08:19:31 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:22:58.062 08:19:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:58.062 08:19:31 -- common/autotest_common.sh@10 -- # set +x 00:22:58.062 08:19:31 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:22:58.062 08:19:31 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:22:58.062 08:19:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:58.062 08:19:31 -- common/autotest_common.sh@10 -- # set +x 00:22:58.062 08:19:31 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:22:58.062 08:19:31 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:22:58.062 08:19:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:22:58.632 08:19:31 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:22:58.632 08:19:31 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:22:58.632 08:19:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:58.632 08:19:31 -- common/autotest_common.sh@10 -- # set +x 00:22:58.632 08:19:31 -- json_config/json_config.sh@48 -- # local ret=0 00:22:58.632 08:19:31 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:22:58.632 08:19:31 -- json_config/json_config.sh@49 -- # local enabled_types 00:22:58.632 08:19:31 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:22:58.632 08:19:31 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:22:58.632 08:19:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:22:58.632 08:19:31 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:22:58.632 08:19:31 -- json_config/json_config.sh@51 -- # local get_types 00:22:58.632 08:19:31 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:22:58.632 08:19:31 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:22:58.632 08:19:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:58.632 08:19:31 -- common/autotest_common.sh@10 -- # set +x 00:22:58.892 08:19:31 -- json_config/json_config.sh@58 -- # return 0 00:22:58.892 08:19:31 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:22:58.892 08:19:31 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
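None of the blockdev/iscsi/vhost branches above apply to this configuration, so the next stage is create_nvmf_subsystem_config, which assembles the NVMe-oF/TCP target through rpc.py. Condensed from the trace that follows (same socket, sizes and NQN as in this run; a sketch rather than the full json_config.sh logic):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0              # bdevs that will back the namespaces
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0                   # TCP transport
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420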
00:22:58.892 08:19:31 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:22:58.892 08:19:31 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:22:58.892 08:19:31 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:22:58.892 08:19:31 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:22:58.892 08:19:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:58.892 08:19:31 -- common/autotest_common.sh@10 -- # set +x 00:22:58.892 08:19:31 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:22:58.892 08:19:31 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:22:58.892 08:19:31 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:22:58.892 08:19:31 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:22:58.892 08:19:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:22:58.892 MallocForNvmf0 00:22:58.892 08:19:32 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:22:58.892 08:19:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:22:59.152 MallocForNvmf1 00:22:59.152 08:19:32 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:22:59.152 08:19:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:22:59.411 [2024-04-17 08:19:32.615312] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.411 08:19:32 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.411 08:19:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.670 08:19:32 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:22:59.670 08:19:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:22:59.930 08:19:33 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:22:59.930 08:19:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:23:00.189 08:19:33 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:23:00.189 08:19:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:23:00.448 [2024-04-17 08:19:33.550076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:00.448 08:19:33 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:23:00.448 08:19:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:00.448 08:19:33 -- common/autotest_common.sh@10 -- # set +x 00:23:00.448 08:19:33 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:23:00.448 08:19:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:00.448 08:19:33 -- common/autotest_common.sh@10 -- # set +x 00:23:00.448 08:19:33 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:23:00.448 08:19:33 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:23:00.448 08:19:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:23:00.707 MallocBdevForConfigChangeCheck 00:23:00.707 08:19:33 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:23:00.707 08:19:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:00.707 08:19:33 -- common/autotest_common.sh@10 -- # set +x 00:23:00.707 08:19:33 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:23:00.707 08:19:33 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:23:01.275 INFO: shutting down applications... 00:23:01.275 08:19:34 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:23:01.275 08:19:34 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:23:01.275 08:19:34 -- json_config/json_config.sh@431 -- # json_config_clear target 00:23:01.275 08:19:34 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:23:01.275 08:19:34 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:23:01.535 Calling clear_iscsi_subsystem 00:23:01.535 Calling clear_nvmf_subsystem 00:23:01.535 Calling clear_nbd_subsystem 00:23:01.535 Calling clear_ublk_subsystem 00:23:01.535 Calling clear_vhost_blk_subsystem 00:23:01.535 Calling clear_vhost_scsi_subsystem 00:23:01.535 Calling clear_scheduler_subsystem 00:23:01.535 Calling clear_bdev_subsystem 00:23:01.535 Calling clear_accel_subsystem 00:23:01.535 Calling clear_vmd_subsystem 00:23:01.535 Calling clear_sock_subsystem 00:23:01.535 Calling clear_iobuf_subsystem 00:23:01.535 08:19:34 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:23:01.535 08:19:34 -- json_config/json_config.sh@396 -- # count=100 00:23:01.535 08:19:34 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:23:01.535 08:19:34 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:23:01.535 08:19:34 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:23:01.535 08:19:34 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:23:01.794 08:19:35 -- json_config/json_config.sh@398 -- # break 00:23:01.794 08:19:35 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:23:01.794 08:19:35 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:23:01.794 08:19:35 -- json_config/json_config.sh@120 -- # local app=target 00:23:01.794 08:19:35 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:23:01.794 08:19:35 -- json_config/json_config.sh@124 -- # [[ -n 54317 ]] 00:23:01.794 08:19:35 -- json_config/json_config.sh@127 -- # kill -SIGINT 54317 00:23:01.794 08:19:35 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
00:23:01.794 08:19:35 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:23:01.794 08:19:35 -- json_config/json_config.sh@130 -- # kill -0 54317 00:23:01.794 08:19:35 -- json_config/json_config.sh@134 -- # sleep 0.5 00:23:02.363 08:19:35 -- json_config/json_config.sh@129 -- # (( i++ )) 00:23:02.363 08:19:35 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:23:02.363 08:19:35 -- json_config/json_config.sh@130 -- # kill -0 54317 00:23:02.363 08:19:35 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:23:02.363 08:19:35 -- json_config/json_config.sh@132 -- # break 00:23:02.363 08:19:35 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:23:02.363 SPDK target shutdown done 00:23:02.363 08:19:35 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:23:02.363 INFO: relaunching applications... 00:23:02.363 08:19:35 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:23:02.363 08:19:35 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:02.363 08:19:35 -- json_config/json_config.sh@98 -- # local app=target 00:23:02.363 08:19:35 -- json_config/json_config.sh@99 -- # shift 00:23:02.363 08:19:35 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:23:02.363 08:19:35 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:23:02.363 08:19:35 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:23:02.363 08:19:35 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:23:02.363 08:19:35 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:23:02.363 08:19:35 -- json_config/json_config.sh@111 -- # app_pid[$app]=54502 00:23:02.363 08:19:35 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:23:02.363 Waiting for target to run... 00:23:02.363 08:19:35 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:02.363 08:19:35 -- json_config/json_config.sh@114 -- # waitforlisten 54502 /var/tmp/spdk_tgt.sock 00:23:02.363 08:19:35 -- common/autotest_common.sh@819 -- # '[' -z 54502 ']' 00:23:02.363 08:19:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:23:02.363 08:19:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:02.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:23:02.363 08:19:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:23:02.363 08:19:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:02.363 08:19:35 -- common/autotest_common.sh@10 -- # set +x 00:23:02.363 [2024-04-17 08:19:35.603698] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:02.363 [2024-04-17 08:19:35.603790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54502 ] 00:23:02.932 [2024-04-17 08:19:35.976738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.932 [2024-04-17 08:19:36.063301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:02.932 [2024-04-17 08:19:36.063470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.192 [2024-04-17 08:19:36.375469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.192 [2024-04-17 08:19:36.407493] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:03.192 08:19:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:03.192 08:19:36 -- common/autotest_common.sh@852 -- # return 0 00:23:03.192 00:23:03.192 08:19:36 -- json_config/json_config.sh@115 -- # echo '' 00:23:03.192 08:19:36 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:23:03.192 INFO: Checking if target configuration is the same... 00:23:03.192 08:19:36 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:23:03.192 08:19:36 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:23:03.192 08:19:36 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:03.192 08:19:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:23:03.192 + '[' 2 -ne 2 ']' 00:23:03.192 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:23:03.192 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:23:03.192 + rootdir=/home/vagrant/spdk_repo/spdk 00:23:03.450 +++ basename /dev/fd/62 00:23:03.450 ++ mktemp /tmp/62.XXX 00:23:03.450 + tmp_file_1=/tmp/62.JWh 00:23:03.450 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:03.450 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:23:03.450 + tmp_file_2=/tmp/spdk_tgt_config.json.NsC 00:23:03.450 + ret=0 00:23:03.450 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:23:03.708 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:23:03.708 + diff -u /tmp/62.JWh /tmp/spdk_tgt_config.json.NsC 00:23:03.708 INFO: JSON config files are the same 00:23:03.708 + echo 'INFO: JSON config files are the same' 00:23:03.708 + rm /tmp/62.JWh /tmp/spdk_tgt_config.json.NsC 00:23:03.708 + exit 0 00:23:03.708 08:19:36 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:23:03.708 08:19:36 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:23:03.708 INFO: changing configuration and checking if this can be detected... 
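The "JSON config files are the same" verdict above comes from json_diff.sh, which normalizes both sides before diffing: the live configuration is pulled with save_config, the on-disk spdk_tgt_config.json is read as written, and both go through config_filter.py -method sort so key order cannot produce a false mismatch. In outline (temp file names here are illustrative):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  $rpc save_config | $filter -method sort > /tmp/live.json         # config of the running target
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk.json
  diff -u /tmp/live.json /tmp/disk.json && echo 'INFO: JSON config files are the same'

The next step deliberately breaks the match by deleting MallocBdevForConfigChangeCheck, so the same diff is expected to return 1.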
00:23:03.708 08:19:36 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:23:03.708 08:19:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:23:03.967 08:19:37 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:03.967 08:19:37 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:23:03.967 08:19:37 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:23:03.967 + '[' 2 -ne 2 ']' 00:23:03.967 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:23:03.967 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:23:03.967 + rootdir=/home/vagrant/spdk_repo/spdk 00:23:03.967 +++ basename /dev/fd/62 00:23:03.967 ++ mktemp /tmp/62.XXX 00:23:03.967 + tmp_file_1=/tmp/62.6T6 00:23:03.967 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:03.967 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:23:03.967 + tmp_file_2=/tmp/spdk_tgt_config.json.W32 00:23:03.967 + ret=0 00:23:03.967 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:23:04.225 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:23:04.484 + diff -u /tmp/62.6T6 /tmp/spdk_tgt_config.json.W32 00:23:04.484 + ret=1 00:23:04.484 + echo '=== Start of file: /tmp/62.6T6 ===' 00:23:04.484 + cat /tmp/62.6T6 00:23:04.484 + echo '=== End of file: /tmp/62.6T6 ===' 00:23:04.484 + echo '' 00:23:04.484 + echo '=== Start of file: /tmp/spdk_tgt_config.json.W32 ===' 00:23:04.484 + cat /tmp/spdk_tgt_config.json.W32 00:23:04.484 + echo '=== End of file: /tmp/spdk_tgt_config.json.W32 ===' 00:23:04.484 + echo '' 00:23:04.484 + rm /tmp/62.6T6 /tmp/spdk_tgt_config.json.W32 00:23:04.484 + exit 1 00:23:04.484 INFO: configuration change detected. 00:23:04.484 08:19:37 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
00:23:04.484 08:19:37 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:23:04.484 08:19:37 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:23:04.484 08:19:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:04.484 08:19:37 -- common/autotest_common.sh@10 -- # set +x 00:23:04.484 08:19:37 -- json_config/json_config.sh@360 -- # local ret=0 00:23:04.484 08:19:37 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:23:04.484 08:19:37 -- json_config/json_config.sh@370 -- # [[ -n 54502 ]] 00:23:04.484 08:19:37 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:23:04.484 08:19:37 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:23:04.484 08:19:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:04.484 08:19:37 -- common/autotest_common.sh@10 -- # set +x 00:23:04.484 08:19:37 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:23:04.484 08:19:37 -- json_config/json_config.sh@246 -- # uname -s 00:23:04.484 08:19:37 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:23:04.484 08:19:37 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:23:04.484 08:19:37 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:23:04.484 08:19:37 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:23:04.484 08:19:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:04.484 08:19:37 -- common/autotest_common.sh@10 -- # set +x 00:23:04.484 08:19:37 -- json_config/json_config.sh@376 -- # killprocess 54502 00:23:04.484 08:19:37 -- common/autotest_common.sh@926 -- # '[' -z 54502 ']' 00:23:04.484 08:19:37 -- common/autotest_common.sh@930 -- # kill -0 54502 00:23:04.484 08:19:37 -- common/autotest_common.sh@931 -- # uname 00:23:04.484 08:19:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:04.484 08:19:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54502 00:23:04.484 killing process with pid 54502 00:23:04.484 08:19:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:04.484 08:19:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:04.485 08:19:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54502' 00:23:04.485 08:19:37 -- common/autotest_common.sh@945 -- # kill 54502 00:23:04.485 08:19:37 -- common/autotest_common.sh@950 -- # wait 54502 00:23:04.743 08:19:37 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:23:04.743 08:19:37 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:23:04.743 08:19:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:04.743 08:19:37 -- common/autotest_common.sh@10 -- # set +x 00:23:04.743 08:19:38 -- json_config/json_config.sh@381 -- # return 0 00:23:04.743 08:19:38 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:23:04.743 INFO: Success 00:23:04.743 00:23:04.743 real 0m7.796s 00:23:04.743 user 0m10.936s 00:23:04.743 sys 0m1.701s 00:23:04.743 08:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.743 08:19:38 -- common/autotest_common.sh@10 -- # set +x 00:23:04.743 ************************************ 00:23:04.743 END TEST json_config 00:23:04.743 ************************************ 00:23:04.743 08:19:38 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:23:04.743 
08:19:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:04.743 08:19:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:04.743 08:19:38 -- common/autotest_common.sh@10 -- # set +x 00:23:04.743 ************************************ 00:23:04.743 START TEST json_config_extra_key 00:23:04.743 ************************************ 00:23:04.743 08:19:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:23:05.002 08:19:38 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.002 08:19:38 -- nvmf/common.sh@7 -- # uname -s 00:23:05.002 08:19:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.002 08:19:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.002 08:19:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.002 08:19:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.002 08:19:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.002 08:19:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.002 08:19:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.002 08:19:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.002 08:19:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.002 08:19:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.002 08:19:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:23:05.002 08:19:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:23:05.002 08:19:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.002 08:19:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.002 08:19:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:05.002 08:19:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.002 08:19:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.002 08:19:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.002 08:19:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.002 08:19:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.002 08:19:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.002 08:19:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:23:05.002 08:19:38 -- paths/export.sh@5 -- # export PATH 00:23:05.002 08:19:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.002 08:19:38 -- nvmf/common.sh@46 -- # : 0 00:23:05.002 08:19:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:05.002 08:19:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:05.002 08:19:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:05.002 08:19:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.002 08:19:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.002 08:19:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:05.002 08:19:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:05.003 08:19:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:23:05.003 INFO: launching applications... 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@25 -- # shift 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54647 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:23:05.003 Waiting for target to run... 
00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54647 /var/tmp/spdk_tgt.sock 00:23:05.003 08:19:38 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:23:05.003 08:19:38 -- common/autotest_common.sh@819 -- # '[' -z 54647 ']' 00:23:05.003 08:19:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:23:05.003 08:19:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:05.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:23:05.003 08:19:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:23:05.003 08:19:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:05.003 08:19:38 -- common/autotest_common.sh@10 -- # set +x 00:23:05.003 [2024-04-17 08:19:38.240838] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:05.003 [2024-04-17 08:19:38.240940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54647 ] 00:23:05.572 [2024-04-17 08:19:38.773585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.572 [2024-04-17 08:19:38.870465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:05.572 [2024-04-17 08:19:38.870644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.832 08:19:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:05.832 08:19:39 -- common/autotest_common.sh@852 -- # return 0 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:23:05.832 00:23:05.832 INFO: shutting down applications... 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
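json_config_extra_key boils down to booting the target directly from extra_key.json and confirming it comes up and answers RPC before shutting it down again. A simplified stand-in for the start_app/waitforlisten pair traced above (the polling loop replaces the real waitforlisten helper):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  sock=/var/tmp/spdk_tgt.sock
  cfg=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
  "$tgt" -m 0x1 -s 1024 -r "$sock" --json "$cfg" &
  pid=$!
  for i in $(seq 1 30); do                                         # wait until the RPC socket answers
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
  kill -SIGINT "$pid"                                              # SIGINT triggers a clean SPDK shutdown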
00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54647 ]] 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54647 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54647 00:23:05.832 08:19:39 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:23:06.492 08:19:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:23:06.492 08:19:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:23:06.492 08:19:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54647 00:23:06.492 08:19:39 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:23:06.492 08:19:39 -- json_config/json_config_extra_key.sh@52 -- # break 00:23:06.492 08:19:39 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:23:06.492 SPDK target shutdown done 00:23:06.492 08:19:39 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:23:06.492 Success 00:23:06.492 08:19:39 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:23:06.492 00:23:06.492 real 0m1.570s 00:23:06.492 user 0m1.234s 00:23:06.492 sys 0m0.549s 00:23:06.492 08:19:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:06.492 08:19:39 -- common/autotest_common.sh@10 -- # set +x 00:23:06.492 ************************************ 00:23:06.492 END TEST json_config_extra_key 00:23:06.492 ************************************ 00:23:06.492 08:19:39 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:06.492 08:19:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:06.492 08:19:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:06.492 08:19:39 -- common/autotest_common.sh@10 -- # set +x 00:23:06.492 ************************************ 00:23:06.492 START TEST alias_rpc 00:23:06.492 ************************************ 00:23:06.492 08:19:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:06.492 * Looking for test storage... 00:23:06.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:23:06.492 08:19:39 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:23:06.752 08:19:39 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54711 00:23:06.752 08:19:39 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:06.752 08:19:39 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54711 00:23:06.752 08:19:39 -- common/autotest_common.sh@819 -- # '[' -z 54711 ']' 00:23:06.752 08:19:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.752 08:19:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:06.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.752 08:19:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:06.752 08:19:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:06.752 08:19:39 -- common/autotest_common.sh@10 -- # set +x 00:23:06.752 [2024-04-17 08:19:39.884370] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:06.752 [2024-04-17 08:19:39.884463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54711 ] 00:23:06.752 [2024-04-17 08:19:40.025127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.013 [2024-04-17 08:19:40.131351] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:07.013 [2024-04-17 08:19:40.131505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.579 08:19:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:07.579 08:19:40 -- common/autotest_common.sh@852 -- # return 0 00:23:07.579 08:19:40 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:23:07.838 08:19:41 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54711 00:23:07.838 08:19:41 -- common/autotest_common.sh@926 -- # '[' -z 54711 ']' 00:23:07.838 08:19:41 -- common/autotest_common.sh@930 -- # kill -0 54711 00:23:07.838 08:19:41 -- common/autotest_common.sh@931 -- # uname 00:23:07.838 08:19:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:07.838 08:19:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54711 00:23:07.838 08:19:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:07.838 08:19:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:07.838 killing process with pid 54711 00:23:07.838 08:19:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54711' 00:23:07.838 08:19:41 -- common/autotest_common.sh@945 -- # kill 54711 00:23:07.838 08:19:41 -- common/autotest_common.sh@950 -- # wait 54711 00:23:08.097 00:23:08.097 real 0m1.719s 00:23:08.097 user 0m1.889s 00:23:08.097 sys 0m0.420s 00:23:08.097 08:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.097 08:19:41 -- common/autotest_common.sh@10 -- # set +x 00:23:08.097 ************************************ 00:23:08.097 END TEST alias_rpc 00:23:08.097 ************************************ 00:23:08.355 08:19:41 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:23:08.355 08:19:41 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:23:08.355 08:19:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:08.355 08:19:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:08.355 08:19:41 -- common/autotest_common.sh@10 -- # set +x 00:23:08.355 ************************************ 00:23:08.355 START TEST spdkcli_tcp 00:23:08.355 ************************************ 00:23:08.355 08:19:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:23:08.355 * Looking for test storage... 
00:23:08.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:08.355 08:19:41 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:08.355 08:19:41 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:08.355 08:19:41 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:08.355 08:19:41 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:23:08.355 08:19:41 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:23:08.355 08:19:41 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.355 08:19:41 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:23:08.355 08:19:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:08.355 08:19:41 -- common/autotest_common.sh@10 -- # set +x 00:23:08.355 08:19:41 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54780 00:23:08.355 08:19:41 -- spdkcli/tcp.sh@27 -- # waitforlisten 54780 00:23:08.355 08:19:41 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:08.355 08:19:41 -- common/autotest_common.sh@819 -- # '[' -z 54780 ']' 00:23:08.355 08:19:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.355 08:19:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:08.355 08:19:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.355 08:19:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:08.355 08:19:41 -- common/autotest_common.sh@10 -- # set +x 00:23:08.614 [2024-04-17 08:19:41.691149] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
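spdkcli_tcp exercises the RPC server over TCP rather than over the UNIX socket directly: as the trace below shows, socat forwards 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is then pointed at that address. The standalone equivalent, with the retry and timeout flags as used by this test, is roughly:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &          # bridge TCP port 9998 to the RPC socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods             # -s/-p select the TCP endpoint; -r/-t are connection retries and timeout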
00:23:08.614 [2024-04-17 08:19:41.691230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54780 ] 00:23:08.614 [2024-04-17 08:19:41.829343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:08.614 [2024-04-17 08:19:41.935184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:08.614 [2024-04-17 08:19:41.935575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.614 [2024-04-17 08:19:41.935583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.553 08:19:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:09.553 08:19:42 -- common/autotest_common.sh@852 -- # return 0 00:23:09.553 08:19:42 -- spdkcli/tcp.sh@31 -- # socat_pid=54797 00:23:09.553 08:19:42 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:23:09.553 08:19:42 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:23:09.553 [ 00:23:09.553 "bdev_malloc_delete", 00:23:09.553 "bdev_malloc_create", 00:23:09.553 "bdev_null_resize", 00:23:09.553 "bdev_null_delete", 00:23:09.553 "bdev_null_create", 00:23:09.553 "bdev_nvme_cuse_unregister", 00:23:09.553 "bdev_nvme_cuse_register", 00:23:09.553 "bdev_opal_new_user", 00:23:09.553 "bdev_opal_set_lock_state", 00:23:09.553 "bdev_opal_delete", 00:23:09.553 "bdev_opal_get_info", 00:23:09.553 "bdev_opal_create", 00:23:09.553 "bdev_nvme_opal_revert", 00:23:09.553 "bdev_nvme_opal_init", 00:23:09.553 "bdev_nvme_send_cmd", 00:23:09.553 "bdev_nvme_get_path_iostat", 00:23:09.553 "bdev_nvme_get_mdns_discovery_info", 00:23:09.553 "bdev_nvme_stop_mdns_discovery", 00:23:09.553 "bdev_nvme_start_mdns_discovery", 00:23:09.553 "bdev_nvme_set_multipath_policy", 00:23:09.553 "bdev_nvme_set_preferred_path", 00:23:09.553 "bdev_nvme_get_io_paths", 00:23:09.553 "bdev_nvme_remove_error_injection", 00:23:09.553 "bdev_nvme_add_error_injection", 00:23:09.553 "bdev_nvme_get_discovery_info", 00:23:09.553 "bdev_nvme_stop_discovery", 00:23:09.553 "bdev_nvme_start_discovery", 00:23:09.553 "bdev_nvme_get_controller_health_info", 00:23:09.553 "bdev_nvme_disable_controller", 00:23:09.553 "bdev_nvme_enable_controller", 00:23:09.553 "bdev_nvme_reset_controller", 00:23:09.553 "bdev_nvme_get_transport_statistics", 00:23:09.553 "bdev_nvme_apply_firmware", 00:23:09.553 "bdev_nvme_detach_controller", 00:23:09.553 "bdev_nvme_get_controllers", 00:23:09.553 "bdev_nvme_attach_controller", 00:23:09.553 "bdev_nvme_set_hotplug", 00:23:09.553 "bdev_nvme_set_options", 00:23:09.553 "bdev_passthru_delete", 00:23:09.553 "bdev_passthru_create", 00:23:09.553 "bdev_lvol_grow_lvstore", 00:23:09.553 "bdev_lvol_get_lvols", 00:23:09.553 "bdev_lvol_get_lvstores", 00:23:09.553 "bdev_lvol_delete", 00:23:09.553 "bdev_lvol_set_read_only", 00:23:09.553 "bdev_lvol_resize", 00:23:09.553 "bdev_lvol_decouple_parent", 00:23:09.553 "bdev_lvol_inflate", 00:23:09.553 "bdev_lvol_rename", 00:23:09.553 "bdev_lvol_clone_bdev", 00:23:09.553 "bdev_lvol_clone", 00:23:09.553 "bdev_lvol_snapshot", 00:23:09.553 "bdev_lvol_create", 00:23:09.553 "bdev_lvol_delete_lvstore", 00:23:09.553 "bdev_lvol_rename_lvstore", 00:23:09.553 "bdev_lvol_create_lvstore", 00:23:09.553 "bdev_raid_set_options", 00:23:09.553 "bdev_raid_remove_base_bdev", 00:23:09.553 "bdev_raid_add_base_bdev", 
00:23:09.553 "bdev_raid_delete", 00:23:09.553 "bdev_raid_create", 00:23:09.553 "bdev_raid_get_bdevs", 00:23:09.553 "bdev_error_inject_error", 00:23:09.553 "bdev_error_delete", 00:23:09.553 "bdev_error_create", 00:23:09.553 "bdev_split_delete", 00:23:09.553 "bdev_split_create", 00:23:09.553 "bdev_delay_delete", 00:23:09.553 "bdev_delay_create", 00:23:09.553 "bdev_delay_update_latency", 00:23:09.553 "bdev_zone_block_delete", 00:23:09.553 "bdev_zone_block_create", 00:23:09.553 "blobfs_create", 00:23:09.553 "blobfs_detect", 00:23:09.553 "blobfs_set_cache_size", 00:23:09.553 "bdev_aio_delete", 00:23:09.553 "bdev_aio_rescan", 00:23:09.553 "bdev_aio_create", 00:23:09.553 "bdev_ftl_set_property", 00:23:09.553 "bdev_ftl_get_properties", 00:23:09.553 "bdev_ftl_get_stats", 00:23:09.553 "bdev_ftl_unmap", 00:23:09.553 "bdev_ftl_unload", 00:23:09.553 "bdev_ftl_delete", 00:23:09.553 "bdev_ftl_load", 00:23:09.553 "bdev_ftl_create", 00:23:09.553 "bdev_virtio_attach_controller", 00:23:09.553 "bdev_virtio_scsi_get_devices", 00:23:09.553 "bdev_virtio_detach_controller", 00:23:09.553 "bdev_virtio_blk_set_hotplug", 00:23:09.553 "bdev_iscsi_delete", 00:23:09.553 "bdev_iscsi_create", 00:23:09.553 "bdev_iscsi_set_options", 00:23:09.553 "bdev_uring_delete", 00:23:09.553 "bdev_uring_create", 00:23:09.553 "accel_error_inject_error", 00:23:09.553 "ioat_scan_accel_module", 00:23:09.553 "dsa_scan_accel_module", 00:23:09.553 "iaa_scan_accel_module", 00:23:09.553 "vfu_virtio_create_scsi_endpoint", 00:23:09.553 "vfu_virtio_scsi_remove_target", 00:23:09.553 "vfu_virtio_scsi_add_target", 00:23:09.553 "vfu_virtio_create_blk_endpoint", 00:23:09.553 "vfu_virtio_delete_endpoint", 00:23:09.553 "iscsi_set_options", 00:23:09.553 "iscsi_get_auth_groups", 00:23:09.553 "iscsi_auth_group_remove_secret", 00:23:09.553 "iscsi_auth_group_add_secret", 00:23:09.553 "iscsi_delete_auth_group", 00:23:09.553 "iscsi_create_auth_group", 00:23:09.553 "iscsi_set_discovery_auth", 00:23:09.553 "iscsi_get_options", 00:23:09.553 "iscsi_target_node_request_logout", 00:23:09.553 "iscsi_target_node_set_redirect", 00:23:09.553 "iscsi_target_node_set_auth", 00:23:09.553 "iscsi_target_node_add_lun", 00:23:09.553 "iscsi_get_connections", 00:23:09.554 "iscsi_portal_group_set_auth", 00:23:09.554 "iscsi_start_portal_group", 00:23:09.554 "iscsi_delete_portal_group", 00:23:09.554 "iscsi_create_portal_group", 00:23:09.554 "iscsi_get_portal_groups", 00:23:09.554 "iscsi_delete_target_node", 00:23:09.554 "iscsi_target_node_remove_pg_ig_maps", 00:23:09.554 "iscsi_target_node_add_pg_ig_maps", 00:23:09.554 "iscsi_create_target_node", 00:23:09.554 "iscsi_get_target_nodes", 00:23:09.554 "iscsi_delete_initiator_group", 00:23:09.554 "iscsi_initiator_group_remove_initiators", 00:23:09.554 "iscsi_initiator_group_add_initiators", 00:23:09.554 "iscsi_create_initiator_group", 00:23:09.554 "iscsi_get_initiator_groups", 00:23:09.554 "nvmf_set_crdt", 00:23:09.554 "nvmf_set_config", 00:23:09.554 "nvmf_set_max_subsystems", 00:23:09.554 "nvmf_subsystem_get_listeners", 00:23:09.554 "nvmf_subsystem_get_qpairs", 00:23:09.554 "nvmf_subsystem_get_controllers", 00:23:09.554 "nvmf_get_stats", 00:23:09.554 "nvmf_get_transports", 00:23:09.554 "nvmf_create_transport", 00:23:09.554 "nvmf_get_targets", 00:23:09.554 "nvmf_delete_target", 00:23:09.554 "nvmf_create_target", 00:23:09.554 "nvmf_subsystem_allow_any_host", 00:23:09.554 "nvmf_subsystem_remove_host", 00:23:09.554 "nvmf_subsystem_add_host", 00:23:09.554 "nvmf_subsystem_remove_ns", 00:23:09.554 "nvmf_subsystem_add_ns", 00:23:09.554 
"nvmf_subsystem_listener_set_ana_state", 00:23:09.554 "nvmf_discovery_get_referrals", 00:23:09.554 "nvmf_discovery_remove_referral", 00:23:09.554 "nvmf_discovery_add_referral", 00:23:09.554 "nvmf_subsystem_remove_listener", 00:23:09.554 "nvmf_subsystem_add_listener", 00:23:09.554 "nvmf_delete_subsystem", 00:23:09.554 "nvmf_create_subsystem", 00:23:09.554 "nvmf_get_subsystems", 00:23:09.554 "env_dpdk_get_mem_stats", 00:23:09.554 "nbd_get_disks", 00:23:09.554 "nbd_stop_disk", 00:23:09.554 "nbd_start_disk", 00:23:09.554 "ublk_recover_disk", 00:23:09.554 "ublk_get_disks", 00:23:09.554 "ublk_stop_disk", 00:23:09.554 "ublk_start_disk", 00:23:09.554 "ublk_destroy_target", 00:23:09.554 "ublk_create_target", 00:23:09.554 "virtio_blk_create_transport", 00:23:09.554 "virtio_blk_get_transports", 00:23:09.554 "vhost_controller_set_coalescing", 00:23:09.554 "vhost_get_controllers", 00:23:09.554 "vhost_delete_controller", 00:23:09.554 "vhost_create_blk_controller", 00:23:09.554 "vhost_scsi_controller_remove_target", 00:23:09.554 "vhost_scsi_controller_add_target", 00:23:09.554 "vhost_start_scsi_controller", 00:23:09.554 "vhost_create_scsi_controller", 00:23:09.554 "thread_set_cpumask", 00:23:09.554 "framework_get_scheduler", 00:23:09.554 "framework_set_scheduler", 00:23:09.554 "framework_get_reactors", 00:23:09.554 "thread_get_io_channels", 00:23:09.554 "thread_get_pollers", 00:23:09.554 "thread_get_stats", 00:23:09.554 "framework_monitor_context_switch", 00:23:09.554 "spdk_kill_instance", 00:23:09.554 "log_enable_timestamps", 00:23:09.554 "log_get_flags", 00:23:09.554 "log_clear_flag", 00:23:09.554 "log_set_flag", 00:23:09.554 "log_get_level", 00:23:09.554 "log_set_level", 00:23:09.554 "log_get_print_level", 00:23:09.554 "log_set_print_level", 00:23:09.554 "framework_enable_cpumask_locks", 00:23:09.554 "framework_disable_cpumask_locks", 00:23:09.554 "framework_wait_init", 00:23:09.554 "framework_start_init", 00:23:09.554 "scsi_get_devices", 00:23:09.554 "bdev_get_histogram", 00:23:09.554 "bdev_enable_histogram", 00:23:09.554 "bdev_set_qos_limit", 00:23:09.554 "bdev_set_qd_sampling_period", 00:23:09.554 "bdev_get_bdevs", 00:23:09.554 "bdev_reset_iostat", 00:23:09.554 "bdev_get_iostat", 00:23:09.554 "bdev_examine", 00:23:09.554 "bdev_wait_for_examine", 00:23:09.554 "bdev_set_options", 00:23:09.554 "notify_get_notifications", 00:23:09.554 "notify_get_types", 00:23:09.554 "accel_get_stats", 00:23:09.554 "accel_set_options", 00:23:09.554 "accel_set_driver", 00:23:09.554 "accel_crypto_key_destroy", 00:23:09.554 "accel_crypto_keys_get", 00:23:09.554 "accel_crypto_key_create", 00:23:09.554 "accel_assign_opc", 00:23:09.554 "accel_get_module_info", 00:23:09.554 "accel_get_opc_assignments", 00:23:09.554 "vmd_rescan", 00:23:09.554 "vmd_remove_device", 00:23:09.554 "vmd_enable", 00:23:09.554 "sock_set_default_impl", 00:23:09.554 "sock_impl_set_options", 00:23:09.554 "sock_impl_get_options", 00:23:09.554 "iobuf_get_stats", 00:23:09.554 "iobuf_set_options", 00:23:09.554 "framework_get_pci_devices", 00:23:09.554 "framework_get_config", 00:23:09.554 "framework_get_subsystems", 00:23:09.554 "vfu_tgt_set_base_path", 00:23:09.554 "trace_get_info", 00:23:09.554 "trace_get_tpoint_group_mask", 00:23:09.554 "trace_disable_tpoint_group", 00:23:09.554 "trace_enable_tpoint_group", 00:23:09.554 "trace_clear_tpoint_mask", 00:23:09.554 "trace_set_tpoint_mask", 00:23:09.554 "spdk_get_version", 00:23:09.554 "rpc_get_methods" 00:23:09.554 ] 00:23:09.554 08:19:42 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:23:09.554 
08:19:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:09.554 08:19:42 -- common/autotest_common.sh@10 -- # set +x 00:23:09.554 08:19:42 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:09.554 08:19:42 -- spdkcli/tcp.sh@38 -- # killprocess 54780 00:23:09.554 08:19:42 -- common/autotest_common.sh@926 -- # '[' -z 54780 ']' 00:23:09.554 08:19:42 -- common/autotest_common.sh@930 -- # kill -0 54780 00:23:09.554 08:19:42 -- common/autotest_common.sh@931 -- # uname 00:23:09.554 08:19:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:09.554 08:19:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54780 00:23:09.554 killing process with pid 54780 00:23:09.554 08:19:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:09.554 08:19:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:09.554 08:19:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54780' 00:23:09.554 08:19:42 -- common/autotest_common.sh@945 -- # kill 54780 00:23:09.554 08:19:42 -- common/autotest_common.sh@950 -- # wait 54780 00:23:10.125 ************************************ 00:23:10.125 END TEST spdkcli_tcp 00:23:10.125 ************************************ 00:23:10.125 00:23:10.125 real 0m1.752s 00:23:10.125 user 0m3.111s 00:23:10.125 sys 0m0.456s 00:23:10.125 08:19:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:10.125 08:19:43 -- common/autotest_common.sh@10 -- # set +x 00:23:10.125 08:19:43 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:23:10.125 08:19:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:10.125 08:19:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:10.125 08:19:43 -- common/autotest_common.sh@10 -- # set +x 00:23:10.125 ************************************ 00:23:10.125 START TEST dpdk_mem_utility 00:23:10.125 ************************************ 00:23:10.125 08:19:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:23:10.125 * Looking for test storage... 00:23:10.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:23:10.125 08:19:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:23:10.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.125 08:19:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54870 00:23:10.125 08:19:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:10.125 08:19:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54870 00:23:10.125 08:19:43 -- common/autotest_common.sh@819 -- # '[' -z 54870 ']' 00:23:10.125 08:19:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.125 08:19:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:10.125 08:19:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.125 08:19:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:10.125 08:19:43 -- common/autotest_common.sh@10 -- # set +x 00:23:10.385 [2024-04-17 08:19:43.495982] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:10.385 [2024-04-17 08:19:43.496075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54870 ] 00:23:10.385 [2024-04-17 08:19:43.633545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.645 [2024-04-17 08:19:43.741107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:10.645 [2024-04-17 08:19:43.741362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.274 08:19:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:11.274 08:19:44 -- common/autotest_common.sh@852 -- # return 0 00:23:11.274 08:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:23:11.274 08:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:23:11.274 08:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:11.274 08:19:44 -- common/autotest_common.sh@10 -- # set +x 00:23:11.274 { 00:23:11.274 "filename": "/tmp/spdk_mem_dump.txt" 00:23:11.274 } 00:23:11.274 08:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:11.274 08:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:23:11.274 DPDK memory size 814.000000 MiB in 1 heap(s) 00:23:11.274 1 heaps totaling size 814.000000 MiB 00:23:11.274 size: 814.000000 MiB heap id: 0 00:23:11.274 end heaps---------- 00:23:11.274 8 mempools totaling size 598.116089 MiB 00:23:11.274 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:23:11.274 size: 158.602051 MiB name: PDU_data_out_Pool 00:23:11.274 size: 84.521057 MiB name: bdev_io_54870 00:23:11.274 size: 51.011292 MiB name: evtpool_54870 00:23:11.274 size: 50.003479 MiB name: msgpool_54870 00:23:11.274 size: 21.763794 MiB name: PDU_Pool 00:23:11.274 size: 19.513306 MiB name: SCSI_TASK_Pool 00:23:11.274 size: 0.026123 MiB name: Session_Pool 00:23:11.274 end mempools------- 00:23:11.274 6 memzones totaling size 4.142822 MiB 00:23:11.274 size: 1.000366 MiB name: RG_ring_0_54870 00:23:11.274 size: 1.000366 MiB name: RG_ring_1_54870 00:23:11.274 size: 1.000366 MiB name: RG_ring_4_54870 00:23:11.274 size: 1.000366 MiB name: RG_ring_5_54870 00:23:11.274 size: 0.125366 MiB name: RG_ring_2_54870 00:23:11.274 size: 0.015991 MiB name: RG_ring_3_54870 00:23:11.274 end memzones------- 00:23:11.274 08:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:23:11.274 heap id: 0 total size: 814.000000 MiB number of busy elements: 310 number of free elements: 15 00:23:11.274 list of free elements. 
size: 12.470093 MiB 00:23:11.274 element at address: 0x200000400000 with size: 1.999512 MiB 00:23:11.274 element at address: 0x200018e00000 with size: 0.999878 MiB 00:23:11.274 element at address: 0x200019000000 with size: 0.999878 MiB 00:23:11.274 element at address: 0x200003e00000 with size: 0.996277 MiB 00:23:11.274 element at address: 0x200031c00000 with size: 0.994446 MiB 00:23:11.274 element at address: 0x200013800000 with size: 0.978699 MiB 00:23:11.274 element at address: 0x200007000000 with size: 0.959839 MiB 00:23:11.274 element at address: 0x200019200000 with size: 0.936584 MiB 00:23:11.274 element at address: 0x200000200000 with size: 0.832825 MiB 00:23:11.274 element at address: 0x20001aa00000 with size: 0.567871 MiB 00:23:11.274 element at address: 0x20000b200000 with size: 0.488892 MiB 00:23:11.274 element at address: 0x200000800000 with size: 0.486145 MiB 00:23:11.274 element at address: 0x200019400000 with size: 0.485657 MiB 00:23:11.274 element at address: 0x200027e00000 with size: 0.395752 MiB 00:23:11.274 element at address: 0x200003a00000 with size: 0.347839 MiB 00:23:11.274 list of standard malloc elements. size: 199.267334 MiB 00:23:11.274 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:23:11.274 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:23:11.274 element at address: 0x200018efff80 with size: 1.000122 MiB 00:23:11.274 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:23:11.274 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:23:11.274 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:23:11.274 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:23:11.274 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:23:11.274 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:23:11.274 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:23:11.274 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087c740 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087c800 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087c980 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:23:11.274 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59180 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59240 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59300 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59480 with size: 0.000183 MiB 00:23:11.274 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59600 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59780 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59840 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59900 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:23:11.274 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003adb300 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003adb500 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003affa80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003affb40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa931c0 with size: 0.000183 MiB 
00:23:11.275 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:23:11.275 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e65500 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:23:11.275 element at 
address: 0x200027e6c3c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e880 
with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:23:11.275 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:23:11.275 list of memzone associated elements. 
size: 602.262573 MiB 00:23:11.275 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:23:11.275 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:23:11.275 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:23:11.275 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:23:11.275 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:23:11.275 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54870_0 00:23:11.275 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:23:11.275 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54870_0 00:23:11.275 element at address: 0x200003fff380 with size: 48.003052 MiB 00:23:11.275 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54870_0 00:23:11.275 element at address: 0x2000195be940 with size: 20.255554 MiB 00:23:11.275 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:23:11.275 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:23:11.275 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:23:11.275 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:23:11.275 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54870 00:23:11.275 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:23:11.275 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54870 00:23:11.275 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:23:11.275 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54870 00:23:11.275 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:23:11.275 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:23:11.275 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:23:11.275 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:23:11.275 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:23:11.275 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:23:11.275 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:23:11.275 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:23:11.275 element at address: 0x200003eff180 with size: 1.000488 MiB 00:23:11.275 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54870 00:23:11.275 element at address: 0x200003affc00 with size: 1.000488 MiB 00:23:11.275 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54870 00:23:11.275 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:23:11.275 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54870 00:23:11.275 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:23:11.275 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54870 00:23:11.275 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:23:11.275 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54870 00:23:11.275 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:23:11.275 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:23:11.275 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:23:11.275 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:23:11.275 element at address: 0x20001947c540 with size: 0.250488 MiB 00:23:11.275 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:23:11.275 element at address: 0x200003adf880 with size: 0.125488 MiB 00:23:11.275 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_54870 00:23:11.275 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:23:11.275 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:23:11.275 element at address: 0x200027e65680 with size: 0.023743 MiB 00:23:11.275 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:23:11.275 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:23:11.275 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54870 00:23:11.275 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:23:11.275 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:23:11.275 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:23:11.275 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54870 00:23:11.275 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:23:11.275 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54870 00:23:11.275 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:23:11.275 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:23:11.275 08:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:23:11.275 08:19:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54870 00:23:11.275 08:19:44 -- common/autotest_common.sh@926 -- # '[' -z 54870 ']' 00:23:11.275 08:19:44 -- common/autotest_common.sh@930 -- # kill -0 54870 00:23:11.275 08:19:44 -- common/autotest_common.sh@931 -- # uname 00:23:11.275 08:19:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:11.275 08:19:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54870 00:23:11.535 08:19:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:11.535 08:19:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:11.535 08:19:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54870' 00:23:11.535 killing process with pid 54870 00:23:11.535 08:19:44 -- common/autotest_common.sh@945 -- # kill 54870 00:23:11.535 08:19:44 -- common/autotest_common.sh@950 -- # wait 54870 00:23:11.795 00:23:11.795 real 0m1.654s 00:23:11.795 user 0m1.790s 00:23:11.795 sys 0m0.407s 00:23:11.795 08:19:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.795 08:19:44 -- common/autotest_common.sh@10 -- # set +x 00:23:11.795 ************************************ 00:23:11.795 END TEST dpdk_mem_utility 00:23:11.795 ************************************ 00:23:11.795 08:19:45 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:23:11.795 08:19:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:11.795 08:19:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:11.795 08:19:45 -- common/autotest_common.sh@10 -- # set +x 00:23:11.795 ************************************ 00:23:11.795 START TEST event 00:23:11.795 ************************************ 00:23:11.795 08:19:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:23:12.055 * Looking for test storage... 
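The dpdk_mem_utility pass that finishes above drives the memory dump in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK heap state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that file, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element detail for heap id 0 seen in the listing above. A short sketch of the same sequence against a running target, assuming the repository paths used in this run:

  # ask the running target to dump DPDK memory statistics
  # (returns {"filename": "/tmp/spdk_mem_dump.txt"})
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # summarize heaps, mempools and memzones from the dump
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # detailed element listing for heap id 0
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0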
00:23:12.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:23:12.055 08:19:45 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:12.055 08:19:45 -- bdev/nbd_common.sh@6 -- # set -e 00:23:12.055 08:19:45 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:23:12.055 08:19:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:12.055 08:19:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:12.055 08:19:45 -- common/autotest_common.sh@10 -- # set +x 00:23:12.055 ************************************ 00:23:12.055 START TEST event_perf 00:23:12.055 ************************************ 00:23:12.055 08:19:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:23:12.055 Running I/O for 1 seconds...[2024-04-17 08:19:45.210592] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:12.055 [2024-04-17 08:19:45.210673] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54941 ] 00:23:12.055 [2024-04-17 08:19:45.355732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.314 [2024-04-17 08:19:45.464390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.314 [2024-04-17 08:19:45.464478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.314 [2024-04-17 08:19:45.464587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.314 [2024-04-17 08:19:45.464591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.273 Running I/O for 1 seconds... 00:23:13.273 lcore 0: 93310 00:23:13.273 lcore 1: 93308 00:23:13.273 lcore 2: 93310 00:23:13.273 lcore 3: 93311 00:23:13.273 done. 00:23:13.273 00:23:13.273 real 0m1.389s 00:23:13.273 user 0m4.193s 00:23:13.273 sys 0m0.069s 00:23:13.273 08:19:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.273 08:19:46 -- common/autotest_common.sh@10 -- # set +x 00:23:13.273 ************************************ 00:23:13.273 END TEST event_perf 00:23:13.273 ************************************ 00:23:13.531 08:19:46 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:23:13.531 08:19:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:13.531 08:19:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:13.531 08:19:46 -- common/autotest_common.sh@10 -- # set +x 00:23:13.531 ************************************ 00:23:13.531 START TEST event_reactor 00:23:13.531 ************************************ 00:23:13.531 08:19:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:23:13.531 [2024-04-17 08:19:46.655103] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:13.531 [2024-04-17 08:19:46.655233] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54979 ] 00:23:13.531 [2024-04-17 08:19:46.782579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.789 [2024-04-17 08:19:46.909555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.726 test_start 00:23:14.726 oneshot 00:23:14.726 tick 100 00:23:14.726 tick 100 00:23:14.726 tick 250 00:23:14.726 tick 100 00:23:14.726 tick 100 00:23:14.726 tick 250 00:23:14.726 tick 500 00:23:14.726 tick 100 00:23:14.726 tick 100 00:23:14.726 tick 100 00:23:14.726 tick 250 00:23:14.726 tick 100 00:23:14.726 tick 100 00:23:14.726 test_end 00:23:14.726 00:23:14.726 real 0m1.384s 00:23:14.726 user 0m1.218s 00:23:14.726 sys 0m0.059s 00:23:14.726 08:19:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.726 ************************************ 00:23:14.726 END TEST event_reactor 00:23:14.726 ************************************ 00:23:14.726 08:19:48 -- common/autotest_common.sh@10 -- # set +x 00:23:14.985 08:19:48 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:23:14.985 08:19:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:14.985 08:19:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:14.985 08:19:48 -- common/autotest_common.sh@10 -- # set +x 00:23:14.985 ************************************ 00:23:14.985 START TEST event_reactor_perf 00:23:14.985 ************************************ 00:23:14.985 08:19:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:23:14.985 [2024-04-17 08:19:48.108060] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:14.985 [2024-04-17 08:19:48.108286] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55015 ] 00:23:14.985 [2024-04-17 08:19:48.251476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.244 [2024-04-17 08:19:48.355033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.246 test_start 00:23:16.246 test_end 00:23:16.246 Performance: 379713 events per second 00:23:16.246 00:23:16.246 real 0m1.380s 00:23:16.246 user 0m1.218s 00:23:16.246 sys 0m0.053s 00:23:16.246 08:19:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.246 08:19:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.246 ************************************ 00:23:16.246 END TEST event_reactor_perf 00:23:16.246 ************************************ 00:23:16.246 08:19:49 -- event/event.sh@49 -- # uname -s 00:23:16.246 08:19:49 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:23:16.246 08:19:49 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:23:16.246 08:19:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:16.246 08:19:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:16.246 08:19:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.246 ************************************ 00:23:16.246 START TEST event_scheduler 00:23:16.246 ************************************ 00:23:16.246 08:19:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:23:16.505 * Looking for test storage... 00:23:16.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:23:16.505 08:19:49 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:23:16.505 08:19:49 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:23:16.505 08:19:49 -- scheduler/scheduler.sh@35 -- # scheduler_pid=55074 00:23:16.505 08:19:49 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:23:16.505 08:19:49 -- scheduler/scheduler.sh@37 -- # waitforlisten 55074 00:23:16.505 08:19:49 -- common/autotest_common.sh@819 -- # '[' -z 55074 ']' 00:23:16.505 08:19:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.505 08:19:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:16.505 08:19:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.505 08:19:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:16.505 08:19:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.505 [2024-04-17 08:19:49.685375] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:16.505 [2024-04-17 08:19:49.685526] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55074 ] 00:23:16.505 [2024-04-17 08:19:49.826550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.764 [2024-04-17 08:19:49.934162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.764 [2024-04-17 08:19:49.934341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.764 [2024-04-17 08:19:49.934445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.764 [2024-04-17 08:19:49.934453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.332 08:19:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:17.332 08:19:50 -- common/autotest_common.sh@852 -- # return 0 00:23:17.332 08:19:50 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:23:17.332 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.332 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.332 POWER: Env isn't set yet! 00:23:17.332 POWER: Attempting to initialise ACPI cpufreq power management... 00:23:17.332 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:23:17.332 POWER: Cannot set governor of lcore 0 to userspace 00:23:17.332 POWER: Attempting to initialise PSTAT power management... 00:23:17.332 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:23:17.332 POWER: Cannot set governor of lcore 0 to performance 00:23:17.332 POWER: Attempting to initialise AMD PSTATE power management... 00:23:17.332 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:23:17.332 POWER: Cannot set governor of lcore 0 to userspace 00:23:17.332 POWER: Attempting to initialise CPPC power management... 00:23:17.332 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:23:17.332 POWER: Cannot set governor of lcore 0 to userspace 00:23:17.332 POWER: Attempting to initialise VM power management... 00:23:17.332 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:23:17.332 POWER: Unable to set Power Management Environment for lcore 0 00:23:17.332 [2024-04-17 08:19:50.639364] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:23:17.332 [2024-04-17 08:19:50.639404] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:23:17.332 [2024-04-17 08:19:50.639436] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:23:17.332 08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.332 08:19:50 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:23:17.332 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.332 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 [2024-04-17 08:19:50.718854] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
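The scheduler test launches its app with --wait-for-rpc, selects the dynamic scheduler, and only then runs framework_start_init; on this VM none of the cpufreq governors are available, so the POWER errors and the "Unable to initialize dpdk governor" notice above are expected and the run continues without failing. The same sequence can be driven against any SPDK target started with --wait-for-rpc, using methods from the rpc_get_methods list earlier in this log (a sketch via rpc.py, not the test's own rpc_cmd wrapper):

  # pick the dynamic scheduler while the framework is still waiting for RPCs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic

  # finish subsystem initialization
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init

  # confirm which scheduler is active
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_scheduler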
00:23:17.591 08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:23:17.591 08:19:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:17.591 08:19:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 ************************************ 00:23:17.591 START TEST scheduler_create_thread 00:23:17.591 ************************************ 00:23:17.591 08:19:50 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:23:17.591 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 2 00:23:17.591 08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:23:17.591 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 3 00:23:17.591 08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:23:17.591 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 4 00:23:17.591 08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:23:17.591 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 5 00:23:17.591 08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:23:17.591 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 6 00:23:17.591 08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:23:17.591 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 7 00:23:17.591 08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:23:17.591 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 8 00:23:17.591 08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:23:17.591 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 9 00:23:17.591 
08:19:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.591 08:19:50 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:23:17.591 08:19:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.591 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:23:18.160 10 00:23:18.160 08:19:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.160 08:19:51 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:23:18.160 08:19:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.160 08:19:51 -- common/autotest_common.sh@10 -- # set +x 00:23:19.538 08:19:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.538 08:19:52 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:23:19.538 08:19:52 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:23:19.538 08:19:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.538 08:19:52 -- common/autotest_common.sh@10 -- # set +x 00:23:20.106 08:19:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.106 08:19:53 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:23:20.106 08:19:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.106 08:19:53 -- common/autotest_common.sh@10 -- # set +x 00:23:21.064 08:19:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:21.064 08:19:54 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:23:21.064 08:19:54 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:23:21.064 08:19:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:21.064 08:19:54 -- common/autotest_common.sh@10 -- # set +x 00:23:21.631 ************************************ 00:23:21.631 END TEST scheduler_create_thread 00:23:21.631 ************************************ 00:23:21.631 08:19:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:21.631 00:23:21.631 real 0m4.211s 00:23:21.631 user 0m0.027s 00:23:21.631 sys 0m0.004s 00:23:21.631 08:19:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.631 08:19:54 -- common/autotest_common.sh@10 -- # set +x 00:23:21.890 08:19:54 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:21.890 08:19:54 -- scheduler/scheduler.sh@46 -- # killprocess 55074 00:23:21.890 08:19:54 -- common/autotest_common.sh@926 -- # '[' -z 55074 ']' 00:23:21.890 08:19:54 -- common/autotest_common.sh@930 -- # kill -0 55074 00:23:21.890 08:19:54 -- common/autotest_common.sh@931 -- # uname 00:23:21.890 08:19:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:21.890 08:19:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55074 00:23:21.890 killing process with pid 55074 00:23:21.890 08:19:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:21.890 08:19:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:21.890 08:19:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55074' 00:23:21.890 08:19:55 -- common/autotest_common.sh@945 -- # kill 55074 00:23:21.890 08:19:55 -- common/autotest_common.sh@950 -- # wait 55074 00:23:22.148 [2024-04-17 08:19:55.223298] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
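The scheduler_create_thread sub-test above drives the test app through its plugin RPCs: it creates pinned active and idle threads, lowers one thread's reported load, and deletes another. A hedged sketch of the same calls, assuming the scheduler_plugin module is reachable by rpc.py's --plugin option; thread IDs are whatever the create call returns.

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }
    # Create a thread pinned to core 0 that reports 100% busy time.
    tid=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    # Drop its reported load to 50% so the dynamic scheduler can rebalance it.
    rpc scheduler_thread_set_active "$tid" 50
    # An unpinned thread can be created and deleted again.
    tid2=$(rpc scheduler_thread_create -n deleted -a 100)
    rpc scheduler_thread_delete "$tid2"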
00:23:22.408 00:23:22.408 real 0m5.983s 00:23:22.408 user 0m13.259s 00:23:22.408 sys 0m0.363s 00:23:22.408 ************************************ 00:23:22.408 END TEST event_scheduler 00:23:22.408 ************************************ 00:23:22.408 08:19:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.408 08:19:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.408 08:19:55 -- event/event.sh@51 -- # modprobe -n nbd 00:23:22.408 08:19:55 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:23:22.408 08:19:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:22.408 08:19:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:22.408 08:19:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.408 ************************************ 00:23:22.408 START TEST app_repeat 00:23:22.408 ************************************ 00:23:22.408 08:19:55 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:23:22.408 08:19:55 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:22.408 08:19:55 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:22.408 08:19:55 -- event/event.sh@13 -- # local nbd_list 00:23:22.408 08:19:55 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:22.408 08:19:55 -- event/event.sh@14 -- # local bdev_list 00:23:22.408 08:19:55 -- event/event.sh@15 -- # local repeat_times=4 00:23:22.408 08:19:55 -- event/event.sh@17 -- # modprobe nbd 00:23:22.408 08:19:55 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:23:22.408 08:19:55 -- event/event.sh@19 -- # repeat_pid=55186 00:23:22.408 08:19:55 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:23:22.408 08:19:55 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 55186' 00:23:22.408 Process app_repeat pid: 55186 00:23:22.408 08:19:55 -- event/event.sh@23 -- # for i in {0..2} 00:23:22.408 08:19:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:23:22.408 spdk_app_start Round 0 00:23:22.408 08:19:55 -- event/event.sh@25 -- # waitforlisten 55186 /var/tmp/spdk-nbd.sock 00:23:22.408 08:19:55 -- common/autotest_common.sh@819 -- # '[' -z 55186 ']' 00:23:22.408 08:19:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:22.408 08:19:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:22.408 08:19:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:22.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:22.408 08:19:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:22.408 08:19:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.408 [2024-04-17 08:19:55.611117] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:22.408 [2024-04-17 08:19:55.611196] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55186 ] 00:23:22.667 [2024-04-17 08:19:55.750947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:22.667 [2024-04-17 08:19:55.858330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.667 [2024-04-17 08:19:55.858343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.236 08:19:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:23.236 08:19:56 -- common/autotest_common.sh@852 -- # return 0 00:23:23.236 08:19:56 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:23.495 Malloc0 00:23:23.495 08:19:56 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:23.755 Malloc1 00:23:23.755 08:19:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@12 -- # local i 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:23.755 08:19:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:24.016 /dev/nbd0 00:23:24.016 08:19:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:24.016 08:19:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:24.016 08:19:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:24.016 08:19:57 -- common/autotest_common.sh@857 -- # local i 00:23:24.016 08:19:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:24.016 08:19:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:24.016 08:19:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:24.016 08:19:57 -- common/autotest_common.sh@861 -- # break 00:23:24.016 08:19:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:24.016 08:19:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:24.016 08:19:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:24.016 1+0 records in 00:23:24.016 1+0 records out 00:23:24.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262752 s, 15.6 MB/s 00:23:24.016 08:19:57 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:24.016 08:19:57 -- common/autotest_common.sh@874 -- # size=4096 00:23:24.016 08:19:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:24.016 08:19:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:24.016 08:19:57 -- common/autotest_common.sh@877 -- # return 0 00:23:24.016 08:19:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:24.016 08:19:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:24.016 08:19:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:24.277 /dev/nbd1 00:23:24.277 08:19:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:24.277 08:19:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:24.277 08:19:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:24.277 08:19:57 -- common/autotest_common.sh@857 -- # local i 00:23:24.277 08:19:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:24.277 08:19:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:24.277 08:19:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:24.277 08:19:57 -- common/autotest_common.sh@861 -- # break 00:23:24.277 08:19:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:24.277 08:19:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:24.277 08:19:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:24.277 1+0 records in 00:23:24.277 1+0 records out 00:23:24.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364809 s, 11.2 MB/s 00:23:24.277 08:19:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:24.277 08:19:57 -- common/autotest_common.sh@874 -- # size=4096 00:23:24.277 08:19:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:24.277 08:19:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:24.277 08:19:57 -- common/autotest_common.sh@877 -- # return 0 00:23:24.277 08:19:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:24.277 08:19:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:24.277 08:19:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:24.277 08:19:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:24.277 08:19:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:24.537 { 00:23:24.537 "nbd_device": "/dev/nbd0", 00:23:24.537 "bdev_name": "Malloc0" 00:23:24.537 }, 00:23:24.537 { 00:23:24.537 "nbd_device": "/dev/nbd1", 00:23:24.537 "bdev_name": "Malloc1" 00:23:24.537 } 00:23:24.537 ]' 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:24.537 { 00:23:24.537 "nbd_device": "/dev/nbd0", 00:23:24.537 "bdev_name": "Malloc0" 00:23:24.537 }, 00:23:24.537 { 00:23:24.537 "nbd_device": "/dev/nbd1", 00:23:24.537 "bdev_name": "Malloc1" 00:23:24.537 } 00:23:24.537 ]' 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:24.537 /dev/nbd1' 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:24.537 /dev/nbd1' 00:23:24.537 08:19:57 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@65 -- # count=2 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@95 -- # count=2 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:24.537 08:19:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:24.798 256+0 records in 00:23:24.798 256+0 records out 00:23:24.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627166 s, 167 MB/s 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:24.798 256+0 records in 00:23:24.798 256+0 records out 00:23:24.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228614 s, 45.9 MB/s 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:24.798 256+0 records in 00:23:24.798 256+0 records out 00:23:24.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246404 s, 42.6 MB/s 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@51 -- # local i 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:24.798 08:19:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@41 -- # break 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@45 -- # return 0 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:25.063 08:19:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@41 -- # break 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@45 -- # return 0 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:25.322 08:19:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@65 -- # true 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@65 -- # count=0 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@104 -- # count=0 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:25.581 08:19:58 -- bdev/nbd_common.sh@109 -- # return 0 00:23:25.581 08:19:58 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:25.839 08:19:59 -- event/event.sh@35 -- # sleep 3 00:23:26.098 [2024-04-17 08:19:59.208897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:26.098 [2024-04-17 08:19:59.315233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.098 [2024-04-17 08:19:59.315235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.098 [2024-04-17 08:19:59.358760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:26.098 [2024-04-17 08:19:59.358821] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:29.388 spdk_app_start Round 1 00:23:29.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
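Each app_repeat round follows the data-verification pattern seen in Round 0 above: create two malloc bdevs over the app's RPC socket, expose them as /dev/nbd0 and /dev/nbd1, write the same random 1 MiB through each device, read it back with cmp, then tear the NBD devices down. A compressed sketch of one round using the paths shown in the log.

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096          # creates Malloc0
    rpc bdev_malloc_create 64 4096          # creates Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256        # 1 MiB reference pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through the NBD device
        cmp -b -n 1M "$tmp" "$nbd"                              # verify it reads back identically
    done
    rm "$tmp"
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1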
00:23:29.388 08:20:02 -- event/event.sh@23 -- # for i in {0..2} 00:23:29.388 08:20:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:23:29.388 08:20:02 -- event/event.sh@25 -- # waitforlisten 55186 /var/tmp/spdk-nbd.sock 00:23:29.388 08:20:02 -- common/autotest_common.sh@819 -- # '[' -z 55186 ']' 00:23:29.388 08:20:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:29.388 08:20:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:29.388 08:20:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:29.388 08:20:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:29.388 08:20:02 -- common/autotest_common.sh@10 -- # set +x 00:23:29.388 08:20:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:29.388 08:20:02 -- common/autotest_common.sh@852 -- # return 0 00:23:29.388 08:20:02 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:29.388 Malloc0 00:23:29.388 08:20:02 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:29.647 Malloc1 00:23:29.647 08:20:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:29.647 08:20:02 -- bdev/nbd_common.sh@12 -- # local i 00:23:29.648 08:20:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:29.648 08:20:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:29.648 08:20:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:29.648 /dev/nbd0 00:23:29.648 08:20:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:29.906 08:20:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:29.906 08:20:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:29.906 08:20:02 -- common/autotest_common.sh@857 -- # local i 00:23:29.906 08:20:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:29.906 08:20:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:29.906 08:20:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:29.906 08:20:02 -- common/autotest_common.sh@861 -- # break 00:23:29.906 08:20:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:29.906 08:20:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:29.906 08:20:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:23:29.906 1+0 records in 00:23:29.906 1+0 records out 00:23:29.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261973 s, 15.6 MB/s 00:23:29.906 08:20:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:29.906 08:20:02 -- common/autotest_common.sh@874 -- # size=4096 00:23:29.906 08:20:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:29.906 08:20:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:29.906 08:20:02 -- common/autotest_common.sh@877 -- # return 0 00:23:29.906 08:20:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:29.906 08:20:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:29.906 08:20:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:30.166 /dev/nbd1 00:23:30.166 08:20:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:30.166 08:20:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:30.166 08:20:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:30.166 08:20:03 -- common/autotest_common.sh@857 -- # local i 00:23:30.166 08:20:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:30.166 08:20:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:30.166 08:20:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:30.166 08:20:03 -- common/autotest_common.sh@861 -- # break 00:23:30.166 08:20:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:30.166 08:20:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:30.166 08:20:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:30.166 1+0 records in 00:23:30.166 1+0 records out 00:23:30.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345144 s, 11.9 MB/s 00:23:30.166 08:20:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:30.166 08:20:03 -- common/autotest_common.sh@874 -- # size=4096 00:23:30.166 08:20:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:30.166 08:20:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:30.166 08:20:03 -- common/autotest_common.sh@877 -- # return 0 00:23:30.166 08:20:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:30.166 08:20:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:30.166 08:20:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:30.166 08:20:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:30.166 08:20:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:30.425 { 00:23:30.425 "nbd_device": "/dev/nbd0", 00:23:30.425 "bdev_name": "Malloc0" 00:23:30.425 }, 00:23:30.425 { 00:23:30.425 "nbd_device": "/dev/nbd1", 00:23:30.425 "bdev_name": "Malloc1" 00:23:30.425 } 00:23:30.425 ]' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:30.425 { 00:23:30.425 "nbd_device": "/dev/nbd0", 00:23:30.425 "bdev_name": "Malloc0" 00:23:30.425 }, 00:23:30.425 { 00:23:30.425 "nbd_device": "/dev/nbd1", 00:23:30.425 "bdev_name": "Malloc1" 00:23:30.425 } 00:23:30.425 ]' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:23:30.425 /dev/nbd1' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:30.425 /dev/nbd1' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@65 -- # count=2 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@95 -- # count=2 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:30.425 256+0 records in 00:23:30.425 256+0 records out 00:23:30.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00629246 s, 167 MB/s 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:30.425 256+0 records in 00:23:30.425 256+0 records out 00:23:30.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186096 s, 56.3 MB/s 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:30.425 256+0 records in 00:23:30.425 256+0 records out 00:23:30.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198511 s, 52.8 MB/s 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@51 -- # local i 00:23:30.425 
08:20:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:30.425 08:20:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:30.684 08:20:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:30.685 08:20:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:30.685 08:20:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:30.685 08:20:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:30.685 08:20:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:30.685 08:20:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:30.685 08:20:03 -- bdev/nbd_common.sh@41 -- # break 00:23:30.685 08:20:03 -- bdev/nbd_common.sh@45 -- # return 0 00:23:30.685 08:20:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:30.685 08:20:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@41 -- # break 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@45 -- # return 0 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:30.944 08:20:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@65 -- # true 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@65 -- # count=0 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@104 -- # count=0 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:31.512 08:20:04 -- bdev/nbd_common.sh@109 -- # return 0 00:23:31.512 08:20:04 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:31.775 08:20:04 -- event/event.sh@35 -- # sleep 3 00:23:31.775 [2024-04-17 08:20:05.090514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:32.040 [2024-04-17 08:20:05.197855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.040 [2024-04-17 08:20:05.197863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.040 [2024-04-17 08:20:05.242687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:32.040 [2024-04-17 08:20:05.242749] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
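Before and after the data pass, the test decides whether the NBD hot-plug worked by asking the target which NBD devices it currently exports and counting them, which is the jq/grep pipeline visible above. A small sketch of that check; the expected count is 2 while both disks are attached and 0 after nbd_stop_disk.

    # List exported NBD devices and count how many /dev/nbd* entries come back.
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 2 ] || exit 1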
00:23:34.572 spdk_app_start Round 2 00:23:34.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:34.572 08:20:07 -- event/event.sh@23 -- # for i in {0..2} 00:23:34.572 08:20:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:23:34.572 08:20:07 -- event/event.sh@25 -- # waitforlisten 55186 /var/tmp/spdk-nbd.sock 00:23:34.572 08:20:07 -- common/autotest_common.sh@819 -- # '[' -z 55186 ']' 00:23:34.572 08:20:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:34.572 08:20:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:34.572 08:20:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:34.572 08:20:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:34.572 08:20:07 -- common/autotest_common.sh@10 -- # set +x 00:23:35.139 08:20:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:35.139 08:20:08 -- common/autotest_common.sh@852 -- # return 0 00:23:35.139 08:20:08 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:35.139 Malloc0 00:23:35.404 08:20:08 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:35.404 Malloc1 00:23:35.404 08:20:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@12 -- # local i 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:35.404 08:20:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:35.672 /dev/nbd0 00:23:35.672 08:20:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:35.672 08:20:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:35.672 08:20:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:35.672 08:20:09 -- common/autotest_common.sh@857 -- # local i 00:23:35.672 08:20:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:35.672 08:20:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:35.672 08:20:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:35.932 08:20:09 -- common/autotest_common.sh@861 -- # break 00:23:35.932 08:20:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:35.932 08:20:09 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:23:35.932 08:20:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:35.932 1+0 records in 00:23:35.932 1+0 records out 00:23:35.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246958 s, 16.6 MB/s 00:23:35.932 08:20:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:35.932 08:20:09 -- common/autotest_common.sh@874 -- # size=4096 00:23:35.932 08:20:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:35.932 08:20:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:35.932 08:20:09 -- common/autotest_common.sh@877 -- # return 0 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:35.932 /dev/nbd1 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:35.932 08:20:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:35.932 08:20:09 -- common/autotest_common.sh@857 -- # local i 00:23:35.932 08:20:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:35.932 08:20:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:35.932 08:20:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:35.932 08:20:09 -- common/autotest_common.sh@861 -- # break 00:23:35.932 08:20:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:35.932 08:20:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:35.932 08:20:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:35.932 1+0 records in 00:23:35.932 1+0 records out 00:23:35.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240458 s, 17.0 MB/s 00:23:35.932 08:20:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:35.932 08:20:09 -- common/autotest_common.sh@874 -- # size=4096 00:23:35.932 08:20:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:35.932 08:20:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:35.932 08:20:09 -- common/autotest_common.sh@877 -- # return 0 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.932 08:20:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:36.191 08:20:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:36.191 { 00:23:36.191 "nbd_device": "/dev/nbd0", 00:23:36.191 "bdev_name": "Malloc0" 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 "nbd_device": "/dev/nbd1", 00:23:36.191 "bdev_name": "Malloc1" 00:23:36.191 } 00:23:36.191 ]' 00:23:36.191 08:20:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:36.191 08:20:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:36.191 { 00:23:36.191 "nbd_device": "/dev/nbd0", 00:23:36.191 "bdev_name": "Malloc0" 00:23:36.191 }, 00:23:36.191 { 00:23:36.191 
"nbd_device": "/dev/nbd1", 00:23:36.191 "bdev_name": "Malloc1" 00:23:36.191 } 00:23:36.191 ]' 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:36.450 /dev/nbd1' 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:36.450 /dev/nbd1' 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@65 -- # count=2 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@95 -- # count=2 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:36.450 256+0 records in 00:23:36.450 256+0 records out 00:23:36.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01208 s, 86.8 MB/s 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:36.450 256+0 records in 00:23:36.450 256+0 records out 00:23:36.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174182 s, 60.2 MB/s 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:36.450 256+0 records in 00:23:36.450 256+0 records out 00:23:36.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209763 s, 50.0 MB/s 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:23:36.450 08:20:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@51 -- # local i 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.450 08:20:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@41 -- # break 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.709 08:20:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@41 -- # break 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:36.968 08:20:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@65 -- # true 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@65 -- # count=0 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@104 -- # count=0 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:37.227 08:20:10 -- bdev/nbd_common.sh@109 -- # return 0 00:23:37.227 08:20:10 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:37.485 08:20:10 -- event/event.sh@35 -- # sleep 3 00:23:37.743 [2024-04-17 08:20:10.833937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:37.743 [2024-04-17 08:20:10.938484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.743 [2024-04-17 08:20:10.938488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.743 [2024-04-17 08:20:10.981686] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:23:37.743 [2024-04-17 08:20:10.981738] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:41.027 08:20:13 -- event/event.sh@38 -- # waitforlisten 55186 /var/tmp/spdk-nbd.sock 00:23:41.027 08:20:13 -- common/autotest_common.sh@819 -- # '[' -z 55186 ']' 00:23:41.027 08:20:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:41.027 08:20:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:41.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:41.027 08:20:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:41.027 08:20:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:41.027 08:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:41.027 08:20:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:41.027 08:20:13 -- common/autotest_common.sh@852 -- # return 0 00:23:41.027 08:20:13 -- event/event.sh@39 -- # killprocess 55186 00:23:41.027 08:20:13 -- common/autotest_common.sh@926 -- # '[' -z 55186 ']' 00:23:41.027 08:20:13 -- common/autotest_common.sh@930 -- # kill -0 55186 00:23:41.027 08:20:13 -- common/autotest_common.sh@931 -- # uname 00:23:41.027 08:20:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:41.027 08:20:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55186 00:23:41.027 killing process with pid 55186 00:23:41.027 08:20:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:41.027 08:20:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:41.027 08:20:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55186' 00:23:41.027 08:20:13 -- common/autotest_common.sh@945 -- # kill 55186 00:23:41.027 08:20:13 -- common/autotest_common.sh@950 -- # wait 55186 00:23:41.027 spdk_app_start is called in Round 0. 00:23:41.027 Shutdown signal received, stop current app iteration 00:23:41.027 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:23:41.027 spdk_app_start is called in Round 1. 00:23:41.027 Shutdown signal received, stop current app iteration 00:23:41.027 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:23:41.027 spdk_app_start is called in Round 2. 00:23:41.027 Shutdown signal received, stop current app iteration 00:23:41.027 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:23:41.027 spdk_app_start is called in Round 3. 
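Shutting the repeat app down uses the killprocess pattern seen above: confirm the PID is still alive, look up its command name (here reactor_0), send SIGTERM, and wait for it to exit so each "Shutdown signal received" message can be logged. This is only a sketch of that pattern under those assumptions, not the exact helper.

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                  # still running?
        ps --no-headers -o comm= "$pid"             # e.g. reactor_0; the real helper special-cases a sudo wrapper here
        echo "killing process with pid $pid"
        kill "$pid"                                 # SIGTERM lets the app stop its reactors cleanly
        wait "$pid" 2>/dev/null || true             # reap it once it has exited
    }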
00:23:41.027 Shutdown signal received, stop current app iteration 00:23:41.027 ************************************ 00:23:41.027 END TEST app_repeat 00:23:41.027 ************************************ 00:23:41.027 08:20:14 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:23:41.027 08:20:14 -- event/event.sh@42 -- # return 0 00:23:41.027 00:23:41.027 real 0m18.585s 00:23:41.027 user 0m41.493s 00:23:41.027 sys 0m2.653s 00:23:41.027 08:20:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.027 08:20:14 -- common/autotest_common.sh@10 -- # set +x 00:23:41.027 08:20:14 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:23:41.027 08:20:14 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:23:41.027 08:20:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:41.027 08:20:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:41.027 08:20:14 -- common/autotest_common.sh@10 -- # set +x 00:23:41.027 ************************************ 00:23:41.027 START TEST cpu_locks 00:23:41.027 ************************************ 00:23:41.027 08:20:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:23:41.027 * Looking for test storage... 00:23:41.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:23:41.027 08:20:14 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:23:41.027 08:20:14 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:23:41.027 08:20:14 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:23:41.027 08:20:14 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:23:41.027 08:20:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:41.027 08:20:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:41.027 08:20:14 -- common/autotest_common.sh@10 -- # set +x 00:23:41.027 ************************************ 00:23:41.027 START TEST default_locks 00:23:41.027 ************************************ 00:23:41.027 08:20:14 -- common/autotest_common.sh@1104 -- # default_locks 00:23:41.027 08:20:14 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:41.027 08:20:14 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55612 00:23:41.027 08:20:14 -- event/cpu_locks.sh@47 -- # waitforlisten 55612 00:23:41.027 08:20:14 -- common/autotest_common.sh@819 -- # '[' -z 55612 ']' 00:23:41.027 08:20:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.027 08:20:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:41.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.027 08:20:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.027 08:20:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:41.027 08:20:14 -- common/autotest_common.sh@10 -- # set +x 00:23:41.285 [2024-04-17 08:20:14.396754] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:41.285 [2024-04-17 08:20:14.396840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55612 ] 00:23:41.285 [2024-04-17 08:20:14.530647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.543 [2024-04-17 08:20:14.637848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:41.543 [2024-04-17 08:20:14.638113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.110 08:20:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:42.110 08:20:15 -- common/autotest_common.sh@852 -- # return 0 00:23:42.110 08:20:15 -- event/cpu_locks.sh@49 -- # locks_exist 55612 00:23:42.110 08:20:15 -- event/cpu_locks.sh@22 -- # lslocks -p 55612 00:23:42.110 08:20:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:42.368 08:20:15 -- event/cpu_locks.sh@50 -- # killprocess 55612 00:23:42.368 08:20:15 -- common/autotest_common.sh@926 -- # '[' -z 55612 ']' 00:23:42.368 08:20:15 -- common/autotest_common.sh@930 -- # kill -0 55612 00:23:42.368 08:20:15 -- common/autotest_common.sh@931 -- # uname 00:23:42.368 08:20:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:42.368 08:20:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55612 00:23:42.368 08:20:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:42.368 08:20:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:42.368 08:20:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55612' 00:23:42.368 killing process with pid 55612 00:23:42.368 08:20:15 -- common/autotest_common.sh@945 -- # kill 55612 00:23:42.368 08:20:15 -- common/autotest_common.sh@950 -- # wait 55612 00:23:42.933 08:20:16 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55612 00:23:42.933 08:20:16 -- common/autotest_common.sh@640 -- # local es=0 00:23:42.933 08:20:16 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55612 00:23:42.933 08:20:16 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:23:42.933 08:20:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:42.933 08:20:16 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:23:42.933 08:20:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:42.933 08:20:16 -- common/autotest_common.sh@643 -- # waitforlisten 55612 00:23:42.933 08:20:16 -- common/autotest_common.sh@819 -- # '[' -z 55612 ']' 00:23:42.933 08:20:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.933 08:20:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:42.933 08:20:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
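The locks_exist check above is the core of the default_locks case: the target started with core mask 0x1 is expected to hold a per-core lock file, and the test confirms it with lslocks. A minimal stand-alone sketch of the same check, using the pid from this run (a target must already be running):
# Sketch of the locks_exist helper as exercised above (pid 55612 in this run).
pid=55612                                    # substitute the running spdk_tgt pid
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "core lock held by pid $pid"
fi
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null       # one lock file per claimed core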
00:23:42.933 08:20:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:42.933 08:20:16 -- common/autotest_common.sh@10 -- # set +x 00:23:42.933 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55612) - No such process 00:23:42.933 ERROR: process (pid: 55612) is no longer running 00:23:42.933 08:20:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:42.933 08:20:16 -- common/autotest_common.sh@852 -- # return 1 00:23:42.933 08:20:16 -- common/autotest_common.sh@643 -- # es=1 00:23:42.933 08:20:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:42.933 08:20:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:42.933 08:20:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:42.933 08:20:16 -- event/cpu_locks.sh@54 -- # no_locks 00:23:42.933 08:20:16 -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:42.933 08:20:16 -- event/cpu_locks.sh@26 -- # local lock_files 00:23:42.933 08:20:16 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:42.933 00:23:42.933 real 0m1.702s 00:23:42.933 user 0m1.802s 00:23:42.933 sys 0m0.460s 00:23:42.933 08:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:42.933 08:20:16 -- common/autotest_common.sh@10 -- # set +x 00:23:42.933 ************************************ 00:23:42.933 END TEST default_locks 00:23:42.933 ************************************ 00:23:42.933 08:20:16 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:23:42.933 08:20:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:42.933 08:20:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:42.933 08:20:16 -- common/autotest_common.sh@10 -- # set +x 00:23:42.933 ************************************ 00:23:42.933 START TEST default_locks_via_rpc 00:23:42.933 ************************************ 00:23:42.933 08:20:16 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:23:42.933 08:20:16 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:42.933 08:20:16 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55664 00:23:42.933 08:20:16 -- event/cpu_locks.sh@63 -- # waitforlisten 55664 00:23:42.933 08:20:16 -- common/autotest_common.sh@819 -- # '[' -z 55664 ']' 00:23:42.933 08:20:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.933 08:20:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:42.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.933 08:20:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.933 08:20:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:42.933 08:20:16 -- common/autotest_common.sh@10 -- # set +x 00:23:42.933 [2024-04-17 08:20:16.166474] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:42.933 [2024-04-17 08:20:16.167083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55664 ] 00:23:43.190 [2024-04-17 08:20:16.298392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.190 [2024-04-17 08:20:16.430666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:43.190 [2024-04-17 08:20:16.430871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.755 08:20:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:43.755 08:20:17 -- common/autotest_common.sh@852 -- # return 0 00:23:43.755 08:20:17 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:23:43.755 08:20:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.755 08:20:17 -- common/autotest_common.sh@10 -- # set +x 00:23:43.755 08:20:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.755 08:20:17 -- event/cpu_locks.sh@67 -- # no_locks 00:23:43.755 08:20:17 -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:43.755 08:20:17 -- event/cpu_locks.sh@26 -- # local lock_files 00:23:43.755 08:20:17 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:43.755 08:20:17 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:23:43.755 08:20:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.755 08:20:17 -- common/autotest_common.sh@10 -- # set +x 00:23:43.755 08:20:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.755 08:20:17 -- event/cpu_locks.sh@71 -- # locks_exist 55664 00:23:43.755 08:20:17 -- event/cpu_locks.sh@22 -- # lslocks -p 55664 00:23:43.755 08:20:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:44.320 08:20:17 -- event/cpu_locks.sh@73 -- # killprocess 55664 00:23:44.320 08:20:17 -- common/autotest_common.sh@926 -- # '[' -z 55664 ']' 00:23:44.320 08:20:17 -- common/autotest_common.sh@930 -- # kill -0 55664 00:23:44.320 08:20:17 -- common/autotest_common.sh@931 -- # uname 00:23:44.320 08:20:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:44.320 08:20:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55664 00:23:44.320 08:20:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:44.320 killing process with pid 55664 00:23:44.320 08:20:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:44.320 08:20:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55664' 00:23:44.320 08:20:17 -- common/autotest_common.sh@945 -- # kill 55664 00:23:44.320 08:20:17 -- common/autotest_common.sh@950 -- # wait 55664 00:23:44.578 00:23:44.578 real 0m1.677s 00:23:44.578 user 0m1.723s 00:23:44.578 sys 0m0.488s 00:23:44.578 08:20:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:44.578 08:20:17 -- common/autotest_common.sh@10 -- # set +x 00:23:44.578 ************************************ 00:23:44.578 END TEST default_locks_via_rpc 00:23:44.578 ************************************ 00:23:44.578 08:20:17 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:23:44.578 08:20:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:44.578 08:20:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:44.578 08:20:17 -- common/autotest_common.sh@10 -- # set +x 00:23:44.578 
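The default_locks_via_rpc run just above drops and re-takes the same locks at runtime through RPC rather than at process start. Roughly the same sequence with SPDK's stock rpc.py client (the script path is an assumption; the test itself goes through its rpc_cmd wrapper):
# Sketch: runtime cpumask-lock control, as in default_locks_via_rpc above.
./scripts/rpc.py framework_disable_cpumask_locks     # release the per-core lock files
./scripts/rpc.py framework_enable_cpumask_locks      # claim them again
lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock  # lock file should be back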
************************************ 00:23:44.578 START TEST non_locking_app_on_locked_coremask 00:23:44.578 ************************************ 00:23:44.578 08:20:17 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:23:44.578 08:20:17 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:44.578 08:20:17 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55710 00:23:44.578 08:20:17 -- event/cpu_locks.sh@81 -- # waitforlisten 55710 /var/tmp/spdk.sock 00:23:44.578 08:20:17 -- common/autotest_common.sh@819 -- # '[' -z 55710 ']' 00:23:44.578 08:20:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.578 08:20:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:44.578 08:20:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.578 08:20:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:44.578 08:20:17 -- common/autotest_common.sh@10 -- # set +x 00:23:44.578 [2024-04-17 08:20:17.884060] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:44.578 [2024-04-17 08:20:17.884140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55710 ] 00:23:44.835 [2024-04-17 08:20:18.009833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.836 [2024-04-17 08:20:18.125117] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:44.836 [2024-04-17 08:20:18.125281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.768 08:20:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:45.768 08:20:18 -- common/autotest_common.sh@852 -- # return 0 00:23:45.768 08:20:18 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55726 00:23:45.768 08:20:18 -- event/cpu_locks.sh@85 -- # waitforlisten 55726 /var/tmp/spdk2.sock 00:23:45.768 08:20:18 -- common/autotest_common.sh@819 -- # '[' -z 55726 ']' 00:23:45.768 08:20:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:45.768 08:20:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:45.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:45.768 08:20:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:45.768 08:20:18 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:23:45.768 08:20:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:45.768 08:20:18 -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 [2024-04-17 08:20:18.854885] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:45.768 [2024-04-17 08:20:18.854982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55726 ] 00:23:45.768 [2024-04-17 08:20:18.989378] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
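non_locking_app_on_locked_coremask starts a second target on the already-claimed core but tells it not to take the lock. The two launches above, reduced to their essentials (paths shortened from the absolute ones printed in the log):
# Sketch of the two launches shown above (pids 55710 and 55726 in this run).
./build/bin/spdk_tgt -m 0x1 &                                                 # claims core 0
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, lock not claimed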
00:23:45.768 [2024-04-17 08:20:18.989447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.025 [2024-04-17 08:20:19.203959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:46.025 [2024-04-17 08:20:19.204130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.589 08:20:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:46.589 08:20:19 -- common/autotest_common.sh@852 -- # return 0 00:23:46.589 08:20:19 -- event/cpu_locks.sh@87 -- # locks_exist 55710 00:23:46.589 08:20:19 -- event/cpu_locks.sh@22 -- # lslocks -p 55710 00:23:46.589 08:20:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:47.525 08:20:20 -- event/cpu_locks.sh@89 -- # killprocess 55710 00:23:47.525 08:20:20 -- common/autotest_common.sh@926 -- # '[' -z 55710 ']' 00:23:47.525 08:20:20 -- common/autotest_common.sh@930 -- # kill -0 55710 00:23:47.525 08:20:20 -- common/autotest_common.sh@931 -- # uname 00:23:47.525 08:20:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:47.525 08:20:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55710 00:23:47.525 08:20:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:47.525 08:20:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:47.525 killing process with pid 55710 00:23:47.525 08:20:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55710' 00:23:47.525 08:20:20 -- common/autotest_common.sh@945 -- # kill 55710 00:23:47.525 08:20:20 -- common/autotest_common.sh@950 -- # wait 55710 00:23:48.094 08:20:21 -- event/cpu_locks.sh@90 -- # killprocess 55726 00:23:48.094 08:20:21 -- common/autotest_common.sh@926 -- # '[' -z 55726 ']' 00:23:48.094 08:20:21 -- common/autotest_common.sh@930 -- # kill -0 55726 00:23:48.094 08:20:21 -- common/autotest_common.sh@931 -- # uname 00:23:48.094 08:20:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:48.094 08:20:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55726 00:23:48.094 08:20:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:48.094 08:20:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:48.094 killing process with pid 55726 00:23:48.094 08:20:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55726' 00:23:48.094 08:20:21 -- common/autotest_common.sh@945 -- # kill 55726 00:23:48.094 08:20:21 -- common/autotest_common.sh@950 -- # wait 55726 00:23:48.662 00:23:48.662 real 0m3.890s 00:23:48.662 user 0m4.304s 00:23:48.662 sys 0m1.046s 00:23:48.662 08:20:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:48.662 08:20:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.662 ************************************ 00:23:48.662 END TEST non_locking_app_on_locked_coremask 00:23:48.662 ************************************ 00:23:48.662 08:20:21 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:23:48.662 08:20:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:48.662 08:20:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:48.662 08:20:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.662 ************************************ 00:23:48.662 START TEST locking_app_on_unlocked_coremask 00:23:48.662 ************************************ 00:23:48.662 08:20:21 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:23:48.662 08:20:21 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55795 00:23:48.662 08:20:21 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:23:48.662 08:20:21 -- event/cpu_locks.sh@99 -- # waitforlisten 55795 /var/tmp/spdk.sock 00:23:48.662 08:20:21 -- common/autotest_common.sh@819 -- # '[' -z 55795 ']' 00:23:48.662 08:20:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.662 08:20:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:48.662 08:20:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.662 08:20:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:48.662 08:20:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.662 [2024-04-17 08:20:21.837386] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:48.662 [2024-04-17 08:20:21.837476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55795 ] 00:23:48.662 [2024-04-17 08:20:21.977621] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:23:48.662 [2024-04-17 08:20:21.977710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.921 [2024-04-17 08:20:22.083503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:48.921 [2024-04-17 08:20:22.083671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.490 08:20:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:49.490 08:20:22 -- common/autotest_common.sh@852 -- # return 0 00:23:49.490 08:20:22 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55811 00:23:49.490 08:20:22 -- event/cpu_locks.sh@103 -- # waitforlisten 55811 /var/tmp/spdk2.sock 00:23:49.490 08:20:22 -- common/autotest_common.sh@819 -- # '[' -z 55811 ']' 00:23:49.490 08:20:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:49.490 08:20:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:49.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:49.490 08:20:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:49.490 08:20:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:49.490 08:20:22 -- common/autotest_common.sh@10 -- # set +x 00:23:49.490 08:20:22 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:23:49.490 [2024-04-17 08:20:22.788482] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:49.490 [2024-04-17 08:20:22.788558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55811 ] 00:23:49.748 [2024-04-17 08:20:22.916803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.007 [2024-04-17 08:20:23.113041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:50.007 [2024-04-17 08:20:23.113187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.576 08:20:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:50.576 08:20:23 -- common/autotest_common.sh@852 -- # return 0 00:23:50.576 08:20:23 -- event/cpu_locks.sh@105 -- # locks_exist 55811 00:23:50.576 08:20:23 -- event/cpu_locks.sh@22 -- # lslocks -p 55811 00:23:50.576 08:20:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:51.144 08:20:24 -- event/cpu_locks.sh@107 -- # killprocess 55795 00:23:51.144 08:20:24 -- common/autotest_common.sh@926 -- # '[' -z 55795 ']' 00:23:51.144 08:20:24 -- common/autotest_common.sh@930 -- # kill -0 55795 00:23:51.144 08:20:24 -- common/autotest_common.sh@931 -- # uname 00:23:51.144 08:20:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:51.144 08:20:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55795 00:23:51.144 08:20:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:51.144 08:20:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:51.144 08:20:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55795' 00:23:51.144 killing process with pid 55795 00:23:51.144 08:20:24 -- common/autotest_common.sh@945 -- # kill 55795 00:23:51.144 08:20:24 -- common/autotest_common.sh@950 -- # wait 55795 00:23:52.081 08:20:25 -- event/cpu_locks.sh@108 -- # killprocess 55811 00:23:52.081 08:20:25 -- common/autotest_common.sh@926 -- # '[' -z 55811 ']' 00:23:52.081 08:20:25 -- common/autotest_common.sh@930 -- # kill -0 55811 00:23:52.081 08:20:25 -- common/autotest_common.sh@931 -- # uname 00:23:52.081 08:20:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:52.081 08:20:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55811 00:23:52.081 killing process with pid 55811 00:23:52.081 08:20:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:52.081 08:20:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:52.081 08:20:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55811' 00:23:52.081 08:20:25 -- common/autotest_common.sh@945 -- # kill 55811 00:23:52.081 08:20:25 -- common/autotest_common.sh@950 -- # wait 55811 00:23:52.340 ************************************ 00:23:52.340 END TEST locking_app_on_unlocked_coremask 00:23:52.340 ************************************ 00:23:52.340 00:23:52.340 real 0m3.743s 00:23:52.340 user 0m4.056s 00:23:52.340 sys 0m1.004s 00:23:52.340 08:20:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:52.340 08:20:25 -- common/autotest_common.sh@10 -- # set +x 00:23:52.340 08:20:25 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:23:52.340 08:20:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:52.340 08:20:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:52.340 08:20:25 -- common/autotest_common.sh@10 -- # set +x 
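locking_app_on_unlocked_coremask flips the order: the first target declines the lock, so the second, default-locking target is the one that claims core 0, which is why locks_exist runs against the second pid (55811) above. A reduced sketch (paths shortened, pid handling omitted):
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # starts without claiming core 0
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # default behaviour: claims core 0
lslocks -p "$pid2" | grep -q spdk_cpu_lock              # $pid2 = second target's pid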
00:23:52.340 ************************************ 00:23:52.340 START TEST locking_app_on_locked_coremask 00:23:52.340 ************************************ 00:23:52.340 08:20:25 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:23:52.340 08:20:25 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55872 00:23:52.340 08:20:25 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:52.340 08:20:25 -- event/cpu_locks.sh@116 -- # waitforlisten 55872 /var/tmp/spdk.sock 00:23:52.340 08:20:25 -- common/autotest_common.sh@819 -- # '[' -z 55872 ']' 00:23:52.340 08:20:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.340 08:20:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:52.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.340 08:20:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.340 08:20:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:52.340 08:20:25 -- common/autotest_common.sh@10 -- # set +x 00:23:52.340 [2024-04-17 08:20:25.627706] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:52.340 [2024-04-17 08:20:25.627819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55872 ] 00:23:52.600 [2024-04-17 08:20:25.771764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.600 [2024-04-17 08:20:25.875970] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:52.600 [2024-04-17 08:20:25.876148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.171 08:20:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:53.171 08:20:26 -- common/autotest_common.sh@852 -- # return 0 00:23:53.171 08:20:26 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:23:53.171 08:20:26 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55888 00:23:53.171 08:20:26 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55888 /var/tmp/spdk2.sock 00:23:53.171 08:20:26 -- common/autotest_common.sh@640 -- # local es=0 00:23:53.171 08:20:26 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55888 /var/tmp/spdk2.sock 00:23:53.171 08:20:26 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:23:53.171 08:20:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:53.171 08:20:26 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:23:53.171 08:20:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:53.171 08:20:26 -- common/autotest_common.sh@643 -- # waitforlisten 55888 /var/tmp/spdk2.sock 00:23:53.171 08:20:26 -- common/autotest_common.sh@819 -- # '[' -z 55888 ']' 00:23:53.171 08:20:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:53.171 08:20:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:53.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:53.171 08:20:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:23:53.171 08:20:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:53.171 08:20:26 -- common/autotest_common.sh@10 -- # set +x 00:23:53.430 [2024-04-17 08:20:26.529427] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:53.430 [2024-04-17 08:20:26.529871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55888 ] 00:23:53.430 [2024-04-17 08:20:26.667457] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55872 has claimed it. 00:23:53.431 [2024-04-17 08:20:26.667524] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:23:53.999 ERROR: process (pid: 55888) is no longer running 00:23:53.999 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55888) - No such process 00:23:53.999 08:20:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:53.999 08:20:27 -- common/autotest_common.sh@852 -- # return 1 00:23:53.999 08:20:27 -- common/autotest_common.sh@643 -- # es=1 00:23:53.999 08:20:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:53.999 08:20:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:53.999 08:20:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:53.999 08:20:27 -- event/cpu_locks.sh@122 -- # locks_exist 55872 00:23:53.999 08:20:27 -- event/cpu_locks.sh@22 -- # lslocks -p 55872 00:23:53.999 08:20:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:54.258 08:20:27 -- event/cpu_locks.sh@124 -- # killprocess 55872 00:23:54.258 08:20:27 -- common/autotest_common.sh@926 -- # '[' -z 55872 ']' 00:23:54.258 08:20:27 -- common/autotest_common.sh@930 -- # kill -0 55872 00:23:54.258 08:20:27 -- common/autotest_common.sh@931 -- # uname 00:23:54.258 08:20:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:54.258 08:20:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55872 00:23:54.258 08:20:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:54.258 08:20:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:54.258 killing process with pid 55872 00:23:54.258 08:20:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55872' 00:23:54.258 08:20:27 -- common/autotest_common.sh@945 -- # kill 55872 00:23:54.258 08:20:27 -- common/autotest_common.sh@950 -- # wait 55872 00:23:54.516 00:23:54.516 real 0m2.259s 00:23:54.516 user 0m2.503s 00:23:54.516 sys 0m0.509s 00:23:54.516 08:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.516 08:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.517 ************************************ 00:23:54.517 END TEST locking_app_on_locked_coremask 00:23:54.517 ************************************ 00:23:54.775 08:20:27 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:23:54.775 08:20:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:54.775 08:20:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:54.775 08:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.775 ************************************ 00:23:54.775 START TEST locking_overlapped_coremask 00:23:54.775 ************************************ 00:23:54.775 08:20:27 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:23:54.775 08:20:27 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55934 00:23:54.775 08:20:27 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:54.775 08:20:27 -- event/cpu_locks.sh@133 -- # waitforlisten 55934 /var/tmp/spdk.sock 00:23:54.775 08:20:27 -- common/autotest_common.sh@819 -- # '[' -z 55934 ']' 00:23:54.775 08:20:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.775 08:20:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:54.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.775 08:20:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.775 08:20:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:54.775 08:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.775 [2024-04-17 08:20:27.952735] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:54.775 [2024-04-17 08:20:27.952822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55934 ] 00:23:54.775 [2024-04-17 08:20:28.078887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:55.033 [2024-04-17 08:20:28.185725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:55.033 [2024-04-17 08:20:28.186026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.033 [2024-04-17 08:20:28.186068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.034 [2024-04-17 08:20:28.186072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.600 08:20:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:55.600 08:20:28 -- common/autotest_common.sh@852 -- # return 0 00:23:55.600 08:20:28 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:23:55.600 08:20:28 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55952 00:23:55.600 08:20:28 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55952 /var/tmp/spdk2.sock 00:23:55.600 08:20:28 -- common/autotest_common.sh@640 -- # local es=0 00:23:55.600 08:20:28 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55952 /var/tmp/spdk2.sock 00:23:55.600 08:20:28 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:23:55.600 08:20:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.600 08:20:28 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:23:55.600 08:20:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:55.600 08:20:28 -- common/autotest_common.sh@643 -- # waitforlisten 55952 /var/tmp/spdk2.sock 00:23:55.600 08:20:28 -- common/autotest_common.sh@819 -- # '[' -z 55952 ']' 00:23:55.600 08:20:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:55.600 08:20:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:55.600 08:20:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:55.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
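The failure being provoked here is just an intersection of core masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two targets collide on core 2, the core named in the error that follows. The overlap can be confirmed with plain shell arithmetic:
printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))    # prints 0x4, i.e. bit 2 -> core 2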
00:23:55.600 08:20:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:55.600 08:20:28 -- common/autotest_common.sh@10 -- # set +x 00:23:55.859 [2024-04-17 08:20:28.982791] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:55.859 [2024-04-17 08:20:28.982909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55952 ] 00:23:55.859 [2024-04-17 08:20:29.119493] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55934 has claimed it. 00:23:55.859 [2024-04-17 08:20:29.119584] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:23:56.467 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55952) - No such process 00:23:56.467 ERROR: process (pid: 55952) is no longer running 00:23:56.467 08:20:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:56.467 08:20:29 -- common/autotest_common.sh@852 -- # return 1 00:23:56.467 08:20:29 -- common/autotest_common.sh@643 -- # es=1 00:23:56.467 08:20:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:56.467 08:20:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:56.467 08:20:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:56.467 08:20:29 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:23:56.467 08:20:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:23:56.467 08:20:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:23:56.467 08:20:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:23:56.467 08:20:29 -- event/cpu_locks.sh@141 -- # killprocess 55934 00:23:56.467 08:20:29 -- common/autotest_common.sh@926 -- # '[' -z 55934 ']' 00:23:56.467 08:20:29 -- common/autotest_common.sh@930 -- # kill -0 55934 00:23:56.467 08:20:29 -- common/autotest_common.sh@931 -- # uname 00:23:56.467 08:20:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:56.467 08:20:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55934 00:23:56.467 08:20:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:56.467 08:20:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:56.467 08:20:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55934' 00:23:56.467 killing process with pid 55934 00:23:56.467 08:20:29 -- common/autotest_common.sh@945 -- # kill 55934 00:23:56.467 08:20:29 -- common/autotest_common.sh@950 -- # wait 55934 00:23:56.728 00:23:56.728 real 0m2.136s 00:23:56.728 user 0m5.895s 00:23:56.728 sys 0m0.380s 00:23:56.728 08:20:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.728 08:20:30 -- common/autotest_common.sh@10 -- # set +x 00:23:56.728 ************************************ 00:23:56.728 END TEST locking_overlapped_coremask 00:23:56.728 ************************************ 00:23:56.987 08:20:30 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:23:56.987 08:20:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:56.987 08:20:30 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:23:56.987 08:20:30 -- common/autotest_common.sh@10 -- # set +x 00:23:56.987 ************************************ 00:23:56.987 START TEST locking_overlapped_coremask_via_rpc 00:23:56.987 ************************************ 00:23:56.987 08:20:30 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:23:56.987 08:20:30 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55992 00:23:56.987 08:20:30 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:23:56.987 08:20:30 -- event/cpu_locks.sh@149 -- # waitforlisten 55992 /var/tmp/spdk.sock 00:23:56.987 08:20:30 -- common/autotest_common.sh@819 -- # '[' -z 55992 ']' 00:23:56.987 08:20:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.987 08:20:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:56.987 08:20:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.987 08:20:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:56.987 08:20:30 -- common/autotest_common.sh@10 -- # set +x 00:23:56.987 [2024-04-17 08:20:30.147241] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:56.987 [2024-04-17 08:20:30.147350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55992 ] 00:23:56.987 [2024-04-17 08:20:30.288170] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:23:56.987 [2024-04-17 08:20:30.288253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:57.247 [2024-04-17 08:20:30.395927] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:57.247 [2024-04-17 08:20:30.396208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.247 [2024-04-17 08:20:30.396383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.247 [2024-04-17 08:20:30.396387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.815 08:20:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:57.815 08:20:31 -- common/autotest_common.sh@852 -- # return 0 00:23:57.815 08:20:31 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=56010 00:23:57.815 08:20:31 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:23:57.815 08:20:31 -- event/cpu_locks.sh@153 -- # waitforlisten 56010 /var/tmp/spdk2.sock 00:23:57.815 08:20:31 -- common/autotest_common.sh@819 -- # '[' -z 56010 ']' 00:23:57.815 08:20:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:57.815 08:20:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:57.815 08:20:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:57.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:23:57.815 08:20:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:57.815 08:20:31 -- common/autotest_common.sh@10 -- # set +x 00:23:57.815 [2024-04-17 08:20:31.115646] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:57.815 [2024-04-17 08:20:31.115798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56010 ] 00:23:58.073 [2024-04-17 08:20:31.260572] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:23:58.074 [2024-04-17 08:20:31.260637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:58.333 [2024-04-17 08:20:31.462843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:58.333 [2024-04-17 08:20:31.463181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.333 [2024-04-17 08:20:31.466481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.333 [2024-04-17 08:20:31.466487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:58.901 08:20:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:58.901 08:20:31 -- common/autotest_common.sh@852 -- # return 0 00:23:58.901 08:20:31 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:23:58.901 08:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.901 08:20:31 -- common/autotest_common.sh@10 -- # set +x 00:23:58.901 08:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.901 08:20:31 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:23:58.901 08:20:31 -- common/autotest_common.sh@640 -- # local es=0 00:23:58.901 08:20:31 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:23:58.901 08:20:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:58.901 08:20:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:58.901 08:20:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:58.901 08:20:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:58.901 08:20:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:23:58.901 08:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.901 08:20:31 -- common/autotest_common.sh@10 -- # set +x 00:23:58.901 [2024-04-17 08:20:31.988406] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55992 has claimed it. 
00:23:58.901 request: 00:23:58.901 { 00:23:58.901 "method": "framework_enable_cpumask_locks", 00:23:58.901 "req_id": 1 00:23:58.901 } 00:23:58.901 Got JSON-RPC error response 00:23:58.901 response: 00:23:58.901 { 00:23:58.901 "code": -32603, 00:23:58.901 "message": "Failed to claim CPU core: 2" 00:23:58.901 } 00:23:58.901 08:20:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:58.901 08:20:31 -- common/autotest_common.sh@643 -- # es=1 00:23:58.901 08:20:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:58.901 08:20:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:58.901 08:20:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:58.901 08:20:31 -- event/cpu_locks.sh@158 -- # waitforlisten 55992 /var/tmp/spdk.sock 00:23:58.901 08:20:31 -- common/autotest_common.sh@819 -- # '[' -z 55992 ']' 00:23:58.901 08:20:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.901 08:20:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:58.901 08:20:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.901 08:20:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:58.901 08:20:32 -- common/autotest_common.sh@10 -- # set +x 00:23:58.901 08:20:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:58.901 08:20:32 -- common/autotest_common.sh@852 -- # return 0 00:23:58.901 08:20:32 -- event/cpu_locks.sh@159 -- # waitforlisten 56010 /var/tmp/spdk2.sock 00:23:58.901 08:20:32 -- common/autotest_common.sh@819 -- # '[' -z 56010 ']' 00:23:58.901 08:20:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:58.901 08:20:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:58.901 08:20:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:58.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
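The request/response pair above is ordinary JSON-RPC over the target's Unix socket; the -32603 error is the second target refusing to claim core 2 while the first instance holds it. The same call can be issued directly, assuming the stock rpc.py client (the log goes through rpc_cmd):
# Sketch: re-issue the RPC shown above against the second target's socket.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# Expected to fail while pid 55992 holds core 2:
#   "code": -32603, "message": "Failed to claim CPU core: 2"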
00:23:58.901 08:20:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:58.901 08:20:32 -- common/autotest_common.sh@10 -- # set +x 00:23:59.199 ************************************ 00:23:59.199 END TEST locking_overlapped_coremask_via_rpc 00:23:59.199 ************************************ 00:23:59.199 08:20:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:59.199 08:20:32 -- common/autotest_common.sh@852 -- # return 0 00:23:59.199 08:20:32 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:23:59.199 08:20:32 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:23:59.199 08:20:32 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:23:59.199 08:20:32 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:23:59.199 00:23:59.199 real 0m2.366s 00:23:59.199 user 0m1.141s 00:23:59.199 sys 0m0.160s 00:23:59.199 08:20:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.199 08:20:32 -- common/autotest_common.sh@10 -- # set +x 00:23:59.199 08:20:32 -- event/cpu_locks.sh@174 -- # cleanup 00:23:59.199 08:20:32 -- event/cpu_locks.sh@15 -- # [[ -z 55992 ]] 00:23:59.199 08:20:32 -- event/cpu_locks.sh@15 -- # killprocess 55992 00:23:59.199 08:20:32 -- common/autotest_common.sh@926 -- # '[' -z 55992 ']' 00:23:59.199 08:20:32 -- common/autotest_common.sh@930 -- # kill -0 55992 00:23:59.199 08:20:32 -- common/autotest_common.sh@931 -- # uname 00:23:59.199 08:20:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:59.199 08:20:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55992 00:23:59.457 killing process with pid 55992 00:23:59.457 08:20:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:59.457 08:20:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:59.457 08:20:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55992' 00:23:59.457 08:20:32 -- common/autotest_common.sh@945 -- # kill 55992 00:23:59.457 08:20:32 -- common/autotest_common.sh@950 -- # wait 55992 00:23:59.716 08:20:32 -- event/cpu_locks.sh@16 -- # [[ -z 56010 ]] 00:23:59.716 08:20:32 -- event/cpu_locks.sh@16 -- # killprocess 56010 00:23:59.716 08:20:32 -- common/autotest_common.sh@926 -- # '[' -z 56010 ']' 00:23:59.716 08:20:32 -- common/autotest_common.sh@930 -- # kill -0 56010 00:23:59.716 08:20:32 -- common/autotest_common.sh@931 -- # uname 00:23:59.716 08:20:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:59.716 08:20:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56010 00:23:59.716 killing process with pid 56010 00:23:59.716 08:20:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:59.716 08:20:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:59.716 08:20:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56010' 00:23:59.716 08:20:32 -- common/autotest_common.sh@945 -- # kill 56010 00:23:59.716 08:20:32 -- common/autotest_common.sh@950 -- # wait 56010 00:23:59.975 08:20:33 -- event/cpu_locks.sh@18 -- # rm -f 00:24:00.233 Process with pid 55992 is not found 00:24:00.233 08:20:33 -- event/cpu_locks.sh@1 -- # cleanup 00:24:00.233 08:20:33 -- event/cpu_locks.sh@15 -- # [[ -z 55992 ]] 00:24:00.233 08:20:33 -- event/cpu_locks.sh@15 -- # 
killprocess 55992 00:24:00.233 08:20:33 -- common/autotest_common.sh@926 -- # '[' -z 55992 ']' 00:24:00.233 08:20:33 -- common/autotest_common.sh@930 -- # kill -0 55992 00:24:00.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (55992) - No such process 00:24:00.233 08:20:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 55992 is not found' 00:24:00.233 08:20:33 -- event/cpu_locks.sh@16 -- # [[ -z 56010 ]] 00:24:00.233 08:20:33 -- event/cpu_locks.sh@16 -- # killprocess 56010 00:24:00.233 08:20:33 -- common/autotest_common.sh@926 -- # '[' -z 56010 ']' 00:24:00.233 08:20:33 -- common/autotest_common.sh@930 -- # kill -0 56010 00:24:00.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (56010) - No such process 00:24:00.233 Process with pid 56010 is not found 00:24:00.233 08:20:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 56010 is not found' 00:24:00.233 08:20:33 -- event/cpu_locks.sh@18 -- # rm -f 00:24:00.233 00:24:00.233 real 0m19.087s 00:24:00.233 user 0m32.666s 00:24:00.233 sys 0m4.881s 00:24:00.233 08:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.233 08:20:33 -- common/autotest_common.sh@10 -- # set +x 00:24:00.233 ************************************ 00:24:00.233 END TEST cpu_locks 00:24:00.233 ************************************ 00:24:00.233 00:24:00.233 real 0m48.315s 00:24:00.233 user 1m34.210s 00:24:00.233 sys 0m8.422s 00:24:00.233 08:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.234 08:20:33 -- common/autotest_common.sh@10 -- # set +x 00:24:00.234 ************************************ 00:24:00.234 END TEST event 00:24:00.234 ************************************ 00:24:00.234 08:20:33 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:24:00.234 08:20:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:00.234 08:20:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:00.234 08:20:33 -- common/autotest_common.sh@10 -- # set +x 00:24:00.234 ************************************ 00:24:00.234 START TEST thread 00:24:00.234 ************************************ 00:24:00.234 08:20:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:24:00.234 * Looking for test storage... 00:24:00.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:24:00.234 08:20:33 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:00.234 08:20:33 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:24:00.234 08:20:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:00.234 08:20:33 -- common/autotest_common.sh@10 -- # set +x 00:24:00.234 ************************************ 00:24:00.234 START TEST thread_poller_perf 00:24:00.234 ************************************ 00:24:00.234 08:20:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:00.234 [2024-04-17 08:20:33.547646] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:24:00.234 [2024-04-17 08:20:33.547770] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56126 ] 00:24:00.492 [2024-04-17 08:20:33.675540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.492 [2024-04-17 08:20:33.787793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.492 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:24:01.883 ====================================== 00:24:01.883 busy:2301823796 (cyc) 00:24:01.883 total_run_count: 316000 00:24:01.883 tsc_hz: 2290000000 (cyc) 00:24:01.883 ====================================== 00:24:01.883 poller_cost: 7284 (cyc), 3180 (nsec) 00:24:01.883 00:24:01.883 real 0m1.382s 00:24:01.883 user 0m1.215s 00:24:01.883 sys 0m0.054s 00:24:01.883 08:20:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.883 08:20:34 -- common/autotest_common.sh@10 -- # set +x 00:24:01.883 ************************************ 00:24:01.883 END TEST thread_poller_perf 00:24:01.883 ************************************ 00:24:01.883 08:20:34 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:01.883 08:20:34 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:24:01.883 08:20:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:01.883 08:20:34 -- common/autotest_common.sh@10 -- # set +x 00:24:01.883 ************************************ 00:24:01.883 START TEST thread_poller_perf 00:24:01.883 ************************************ 00:24:01.883 08:20:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:01.883 [2024-04-17 08:20:34.984682] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:01.883 [2024-04-17 08:20:34.984810] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56167 ] 00:24:01.883 [2024-04-17 08:20:35.123495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.142 [2024-04-17 08:20:35.230142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.142 Running 1000 pollers for 1 seconds with 0 microseconds period. 
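The poller_perf summary lines appear to be simple derivations of their own counters: poller_cost is the busy cycle count divided by total_run_count, and the nanosecond figure is that per-call cost scaled by tsc_hz. Re-deriving the first run's numbers:
# 2301823796 cyc / 316000 runs ≈ 7284 cyc per poller call
# 7284 cyc at 2.29 GHz         ≈ 3181 ns (reported as 3180 nsec, presumably truncated)
awk 'BEGIN { c = 2301823796/316000; printf "%.0f cyc, %.0f nsec\n", c, c*1e9/2290000000 }'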
00:24:03.081 ====================================== 00:24:03.081 busy:2292999222 (cyc) 00:24:03.081 total_run_count: 4202000 00:24:03.081 tsc_hz: 2290000000 (cyc) 00:24:03.081 ====================================== 00:24:03.081 poller_cost: 545 (cyc), 237 (nsec) 00:24:03.081 00:24:03.081 real 0m1.377s 00:24:03.081 user 0m1.207s 00:24:03.081 sys 0m0.060s 00:24:03.081 08:20:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:03.081 08:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:03.081 ************************************ 00:24:03.081 END TEST thread_poller_perf 00:24:03.081 ************************************ 00:24:03.081 08:20:36 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:24:03.081 ************************************ 00:24:03.081 END TEST thread 00:24:03.081 ************************************ 00:24:03.081 00:24:03.081 real 0m2.967s 00:24:03.081 user 0m2.487s 00:24:03.081 sys 0m0.264s 00:24:03.081 08:20:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:03.081 08:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:03.341 08:20:36 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:24:03.341 08:20:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:03.341 08:20:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:03.341 08:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:03.341 ************************************ 00:24:03.341 START TEST accel 00:24:03.341 ************************************ 00:24:03.341 08:20:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:24:03.341 * Looking for test storage... 00:24:03.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:24:03.341 08:20:36 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:24:03.341 08:20:36 -- accel/accel.sh@74 -- # get_expected_opcs 00:24:03.341 08:20:36 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:24:03.341 08:20:36 -- accel/accel.sh@59 -- # spdk_tgt_pid=56235 00:24:03.341 08:20:36 -- accel/accel.sh@60 -- # waitforlisten 56235 00:24:03.341 08:20:36 -- common/autotest_common.sh@819 -- # '[' -z 56235 ']' 00:24:03.341 08:20:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.341 08:20:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:03.341 08:20:36 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:24:03.341 08:20:36 -- accel/accel.sh@58 -- # build_accel_config 00:24:03.341 08:20:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.341 08:20:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:03.341 08:20:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:03.341 08:20:36 -- common/autotest_common.sh@10 -- # set +x 00:24:03.341 08:20:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:03.341 08:20:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:03.341 08:20:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:03.341 08:20:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:03.341 08:20:36 -- accel/accel.sh@41 -- # local IFS=, 00:24:03.341 08:20:36 -- accel/accel.sh@42 -- # jq -r . 00:24:03.341 [2024-04-17 08:20:36.595201] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:24:03.341 [2024-04-17 08:20:36.595296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56235 ] 00:24:03.601 [2024-04-17 08:20:36.732854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.601 [2024-04-17 08:20:36.835699] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:03.601 [2024-04-17 08:20:36.835839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.170 08:20:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:04.170 08:20:37 -- common/autotest_common.sh@852 -- # return 0 00:24:04.170 08:20:37 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:24:04.170 08:20:37 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:24:04.170 08:20:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:04.170 08:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:04.170 08:20:37 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:24:04.170 08:20:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.170 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.170 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.170 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.430 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.430 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.430 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.430 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.430 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.430 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.430 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.430 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.430 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.430 08:20:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:24:04.430 08:20:37 -- accel/accel.sh@64 -- # IFS== 00:24:04.430 08:20:37 -- accel/accel.sh@64 -- # read -r opc module 00:24:04.430 08:20:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:24:04.430 08:20:37 -- accel/accel.sh@67 -- # killprocess 56235 00:24:04.430 08:20:37 -- common/autotest_common.sh@926 -- # '[' -z 56235 ']' 00:24:04.430 08:20:37 -- common/autotest_common.sh@930 -- # kill -0 56235 00:24:04.430 08:20:37 -- common/autotest_common.sh@931 -- # uname 00:24:04.430 08:20:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:04.430 08:20:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56235 00:24:04.430 08:20:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:04.430 08:20:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:04.430 killing process with pid 56235 00:24:04.430 08:20:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56235' 00:24:04.430 08:20:37 -- common/autotest_common.sh@945 -- # kill 56235 00:24:04.430 08:20:37 -- common/autotest_common.sh@950 -- # wait 56235 00:24:04.688 08:20:37 -- accel/accel.sh@68 -- # trap - ERR 00:24:04.688 08:20:37 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:24:04.688 08:20:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:04.688 08:20:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:04.688 08:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:04.688 08:20:37 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:24:04.688 08:20:37 -- accel/accel.sh@12 -- # build_accel_config 00:24:04.688 08:20:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:04.688 08:20:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:24:04.688 08:20:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:04.688 08:20:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:04.688 08:20:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:04.688 08:20:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 
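
The get_expected_opcs helper above asks the running target which module services each accel opcode and stores the answer; with no accel driver configured, every opcode comes back as the software module, which is what the later "[[ -n software ]]" checks rely on. The same query can be made by hand with the RPC and jq filter seen in the trace (a sketch, assuming the default /var/tmp/spdk.sock socket and the repo's scripts/rpc.py):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# expected here: copy=software, fill=software, crc32c=software, ... one line per opcode
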
00:24:04.688 08:20:37 -- accel/accel.sh@41 -- # local IFS=, 00:24:04.688 08:20:37 -- accel/accel.sh@42 -- # jq -r . 00:24:04.688 08:20:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.688 08:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:04.688 08:20:37 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:24:04.688 08:20:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:24:04.688 08:20:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:04.688 08:20:37 -- common/autotest_common.sh@10 -- # set +x 00:24:04.688 ************************************ 00:24:04.688 START TEST accel_missing_filename 00:24:04.688 ************************************ 00:24:04.688 08:20:37 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:24:04.688 08:20:37 -- common/autotest_common.sh@640 -- # local es=0 00:24:04.688 08:20:37 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:24:04.688 08:20:37 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:24:04.688 08:20:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:04.688 08:20:37 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:24:04.688 08:20:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:04.688 08:20:37 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:24:04.688 08:20:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:24:04.688 08:20:37 -- accel/accel.sh@12 -- # build_accel_config 00:24:04.688 08:20:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:04.688 08:20:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:04.688 08:20:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:04.688 08:20:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:04.688 08:20:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:04.688 08:20:37 -- accel/accel.sh@41 -- # local IFS=, 00:24:04.688 08:20:37 -- accel/accel.sh@42 -- # jq -r . 00:24:04.688 [2024-04-17 08:20:37.983748] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:04.688 [2024-04-17 08:20:37.983848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56292 ] 00:24:04.947 [2024-04-17 08:20:38.130014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.948 [2024-04-17 08:20:38.233859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.948 [2024-04-17 08:20:38.277890] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:05.208 [2024-04-17 08:20:38.339105] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:24:05.208 A filename is required. 
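
accel_missing_filename is a negative test: the NOT wrapper inverts the exit status, so the case passes only if accel_perf refuses to run a compress workload without -l and prints the "A filename is required." error captured above. Stripped of the harness, the assertion amounts to something like this (a sketch of the pattern, not the autotest_common.sh code; the harness also feeds a generated JSON config via -c /dev/fd/62):

perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
if "$perf" -t 1 -w compress; then
    echo "FAIL: compress with no input file unexpectedly succeeded"
    exit 1
fi
echo "PASS: missing -l rejected as expected"
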
00:24:05.208 08:20:38 -- common/autotest_common.sh@643 -- # es=234 00:24:05.208 08:20:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:05.208 08:20:38 -- common/autotest_common.sh@652 -- # es=106 00:24:05.208 08:20:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:24:05.208 08:20:38 -- common/autotest_common.sh@660 -- # es=1 00:24:05.208 08:20:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:05.208 00:24:05.208 real 0m0.483s 00:24:05.208 user 0m0.323s 00:24:05.208 sys 0m0.095s 00:24:05.208 08:20:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.208 08:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:05.208 ************************************ 00:24:05.208 END TEST accel_missing_filename 00:24:05.208 ************************************ 00:24:05.208 08:20:38 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:05.208 08:20:38 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:24:05.208 08:20:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:05.208 08:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:05.208 ************************************ 00:24:05.208 START TEST accel_compress_verify 00:24:05.208 ************************************ 00:24:05.208 08:20:38 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:05.208 08:20:38 -- common/autotest_common.sh@640 -- # local es=0 00:24:05.208 08:20:38 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:05.208 08:20:38 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:24:05.208 08:20:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:05.208 08:20:38 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:24:05.208 08:20:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:05.208 08:20:38 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:05.208 08:20:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:05.208 08:20:38 -- accel/accel.sh@12 -- # build_accel_config 00:24:05.208 08:20:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:05.208 08:20:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:05.208 08:20:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:05.208 08:20:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:05.208 08:20:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:05.208 08:20:38 -- accel/accel.sh@41 -- # local IFS=, 00:24:05.208 08:20:38 -- accel/accel.sh@42 -- # jq -r . 00:24:05.208 [2024-04-17 08:20:38.518344] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:24:05.208 [2024-04-17 08:20:38.518442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56311 ] 00:24:05.468 [2024-04-17 08:20:38.658184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.468 [2024-04-17 08:20:38.763951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.728 [2024-04-17 08:20:38.807794] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:05.728 [2024-04-17 08:20:38.868571] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:24:05.728 00:24:05.728 Compression does not support the verify option, aborting. 00:24:05.728 08:20:38 -- common/autotest_common.sh@643 -- # es=161 00:24:05.728 08:20:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:05.728 08:20:38 -- common/autotest_common.sh@652 -- # es=33 00:24:05.728 08:20:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:24:05.728 08:20:38 -- common/autotest_common.sh@660 -- # es=1 00:24:05.728 08:20:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:05.728 00:24:05.728 real 0m0.488s 00:24:05.728 user 0m0.331s 00:24:05.728 sys 0m0.095s 00:24:05.728 08:20:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.728 08:20:38 -- common/autotest_common.sh@10 -- # set +x 00:24:05.728 ************************************ 00:24:05.728 END TEST accel_compress_verify 00:24:05.728 ************************************ 00:24:05.728 08:20:39 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:24:05.728 08:20:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:24:05.728 08:20:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:05.728 08:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:05.728 ************************************ 00:24:05.728 START TEST accel_wrong_workload 00:24:05.728 ************************************ 00:24:05.728 08:20:39 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:24:05.728 08:20:39 -- common/autotest_common.sh@640 -- # local es=0 00:24:05.728 08:20:39 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:24:05.728 08:20:39 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:24:05.728 08:20:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:05.728 08:20:39 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:24:05.728 08:20:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:05.728 08:20:39 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:24:05.728 08:20:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:24:05.728 08:20:39 -- accel/accel.sh@12 -- # build_accel_config 00:24:05.728 08:20:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:05.728 08:20:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:05.728 08:20:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:05.728 08:20:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:05.728 08:20:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:05.728 08:20:39 -- accel/accel.sh@41 -- # local IFS=, 00:24:05.728 08:20:39 -- accel/accel.sh@42 -- # jq -r . 
00:24:05.728 Unsupported workload type: foobar 00:24:05.728 [2024-04-17 08:20:39.050388] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:24:05.728 accel_perf options: 00:24:05.728 [-h help message] 00:24:05.728 [-q queue depth per core] 00:24:05.728 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:24:05.728 [-T number of threads per core 00:24:05.728 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:24:05.728 [-t time in seconds] 00:24:05.728 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:24:05.728 [ dif_verify, , dif_generate, dif_generate_copy 00:24:05.728 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:24:05.728 [-l for compress/decompress workloads, name of uncompressed input file 00:24:05.728 [-S for crc32c workload, use this seed value (default 0) 00:24:05.728 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:24:05.728 [-f for fill workload, use this BYTE value (default 255) 00:24:05.728 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:24:05.728 [-y verify result if this switch is on] 00:24:05.728 [-a tasks to allocate per core (default: same value as -q)] 00:24:05.728 Can be used to spread operations across a wider range of memory. 00:24:05.728 08:20:39 -- common/autotest_common.sh@643 -- # es=1 00:24:05.728 08:20:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:05.728 08:20:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:05.728 08:20:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:05.728 00:24:05.728 real 0m0.042s 00:24:05.728 user 0m0.023s 00:24:05.728 sys 0m0.018s 00:24:05.728 08:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.728 08:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:05.728 ************************************ 00:24:05.728 END TEST accel_wrong_workload 00:24:05.728 ************************************ 00:24:05.988 08:20:39 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:24:05.988 08:20:39 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:24:05.988 08:20:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:05.988 08:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:05.988 ************************************ 00:24:05.988 START TEST accel_negative_buffers 00:24:05.988 ************************************ 00:24:05.988 08:20:39 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:24:05.988 08:20:39 -- common/autotest_common.sh@640 -- # local es=0 00:24:05.988 08:20:39 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:24:05.988 08:20:39 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:24:05.988 08:20:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:05.988 08:20:39 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:24:05.988 08:20:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:05.988 08:20:39 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:24:05.988 08:20:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:24:05.988 08:20:39 -- accel/accel.sh@12 -- # 
build_accel_config 00:24:05.988 08:20:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:05.988 08:20:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:05.988 08:20:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:05.988 08:20:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:05.988 08:20:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:05.988 08:20:39 -- accel/accel.sh@41 -- # local IFS=, 00:24:05.988 08:20:39 -- accel/accel.sh@42 -- # jq -r . 00:24:05.988 -x option must be non-negative. 00:24:05.988 [2024-04-17 08:20:39.125553] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:24:05.988 accel_perf options: 00:24:05.988 [-h help message] 00:24:05.988 [-q queue depth per core] 00:24:05.988 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:24:05.988 [-T number of threads per core 00:24:05.988 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:24:05.988 [-t time in seconds] 00:24:05.988 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:24:05.988 [ dif_verify, , dif_generate, dif_generate_copy 00:24:05.988 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:24:05.988 [-l for compress/decompress workloads, name of uncompressed input file 00:24:05.988 [-S for crc32c workload, use this seed value (default 0) 00:24:05.988 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:24:05.988 [-f for fill workload, use this BYTE value (default 255) 00:24:05.988 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:24:05.988 [-y verify result if this switch is on] 00:24:05.988 [-a tasks to allocate per core (default: same value as -q)] 00:24:05.988 Can be used to spread operations across a wider range of memory. 
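
The option summary printed above maps directly onto the invocations in the cases that follow: -w picks the workload, -S seeds crc32c, -C sets the vector count, -y enables verification and -t bounds the run time. The crc32c case below, for example, is effectively the following command (the -c /dev/fd/62 config descriptor is injected by the harness):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
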
00:24:05.988 08:20:39 -- common/autotest_common.sh@643 -- # es=1 00:24:05.988 08:20:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:05.988 08:20:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:05.988 08:20:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:05.988 00:24:05.988 real 0m0.031s 00:24:05.988 user 0m0.019s 00:24:05.988 sys 0m0.012s 00:24:05.988 08:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.988 08:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:05.988 ************************************ 00:24:05.988 END TEST accel_negative_buffers 00:24:05.988 ************************************ 00:24:05.988 08:20:39 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:24:05.988 08:20:39 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:24:05.988 08:20:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:05.988 08:20:39 -- common/autotest_common.sh@10 -- # set +x 00:24:05.988 ************************************ 00:24:05.988 START TEST accel_crc32c 00:24:05.988 ************************************ 00:24:05.988 08:20:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:24:05.988 08:20:39 -- accel/accel.sh@16 -- # local accel_opc 00:24:05.988 08:20:39 -- accel/accel.sh@17 -- # local accel_module 00:24:05.988 08:20:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:24:05.988 08:20:39 -- accel/accel.sh@12 -- # build_accel_config 00:24:05.988 08:20:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:24:05.988 08:20:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:05.988 08:20:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:05.988 08:20:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:05.988 08:20:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:05.988 08:20:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:05.988 08:20:39 -- accel/accel.sh@41 -- # local IFS=, 00:24:05.988 08:20:39 -- accel/accel.sh@42 -- # jq -r . 00:24:05.988 [2024-04-17 08:20:39.218433] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:05.988 [2024-04-17 08:20:39.218532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56375 ] 00:24:06.269 [2024-04-17 08:20:39.357721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.269 [2024-04-17 08:20:39.460858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.698 08:20:40 -- accel/accel.sh@18 -- # out=' 00:24:07.698 SPDK Configuration: 00:24:07.698 Core mask: 0x1 00:24:07.698 00:24:07.698 Accel Perf Configuration: 00:24:07.698 Workload Type: crc32c 00:24:07.698 CRC-32C seed: 32 00:24:07.698 Transfer size: 4096 bytes 00:24:07.698 Vector count 1 00:24:07.698 Module: software 00:24:07.698 Queue depth: 32 00:24:07.698 Allocate depth: 32 00:24:07.698 # threads/core: 1 00:24:07.698 Run time: 1 seconds 00:24:07.698 Verify: Yes 00:24:07.698 00:24:07.698 Running for 1 seconds... 
00:24:07.698 00:24:07.698 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:07.698 ------------------------------------------------------------------------------------ 00:24:07.698 0,0 497184/s 1942 MiB/s 0 0 00:24:07.698 ==================================================================================== 00:24:07.698 Total 497184/s 1942 MiB/s 0 0' 00:24:07.698 08:20:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:24:07.698 08:20:40 -- accel/accel.sh@12 -- # build_accel_config 00:24:07.698 08:20:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:07.698 08:20:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:07.698 08:20:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:07.698 08:20:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:07.698 08:20:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:07.698 08:20:40 -- accel/accel.sh@41 -- # local IFS=, 00:24:07.698 08:20:40 -- accel/accel.sh@42 -- # jq -r . 00:24:07.698 [2024-04-17 08:20:40.687981] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:07.698 [2024-04-17 08:20:40.688062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56389 ] 00:24:07.698 [2024-04-17 08:20:40.827398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.698 [2024-04-17 08:20:40.924447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val= 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val= 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val=0x1 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val= 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val= 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val=crc32c 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val=32 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val= 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val=software 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@23 -- # accel_module=software 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val=32 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val=32 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val=1 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val=Yes 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val= 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:07.698 08:20:40 -- accel/accel.sh@21 -- # val= 00:24:07.698 08:20:40 -- accel/accel.sh@22 -- # case "$var" in 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # IFS=: 00:24:07.698 08:20:40 -- accel/accel.sh@20 -- # read -r var val 00:24:09.076 08:20:42 -- accel/accel.sh@21 -- # val= 00:24:09.076 08:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # IFS=: 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # read -r var val 00:24:09.076 08:20:42 -- accel/accel.sh@21 -- # val= 00:24:09.076 08:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # IFS=: 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # read -r var val 00:24:09.076 08:20:42 -- accel/accel.sh@21 -- # val= 00:24:09.076 08:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # IFS=: 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # read -r var val 00:24:09.076 08:20:42 -- accel/accel.sh@21 -- # val= 00:24:09.076 08:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # IFS=: 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # read -r var val 00:24:09.076 08:20:42 -- accel/accel.sh@21 -- # val= 00:24:09.076 08:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # IFS=: 00:24:09.076 08:20:42 -- 
accel/accel.sh@20 -- # read -r var val 00:24:09.076 08:20:42 -- accel/accel.sh@21 -- # val= 00:24:09.076 08:20:42 -- accel/accel.sh@22 -- # case "$var" in 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # IFS=: 00:24:09.076 08:20:42 -- accel/accel.sh@20 -- # read -r var val 00:24:09.076 08:20:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:09.076 08:20:42 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:24:09.076 08:20:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:09.076 00:24:09.076 real 0m2.949s 00:24:09.076 user 0m2.560s 00:24:09.076 sys 0m0.194s 00:24:09.076 08:20:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.076 08:20:42 -- common/autotest_common.sh@10 -- # set +x 00:24:09.076 ************************************ 00:24:09.076 END TEST accel_crc32c 00:24:09.076 ************************************ 00:24:09.076 08:20:42 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:24:09.076 08:20:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:24:09.076 08:20:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:09.076 08:20:42 -- common/autotest_common.sh@10 -- # set +x 00:24:09.076 ************************************ 00:24:09.076 START TEST accel_crc32c_C2 00:24:09.076 ************************************ 00:24:09.076 08:20:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:24:09.076 08:20:42 -- accel/accel.sh@16 -- # local accel_opc 00:24:09.076 08:20:42 -- accel/accel.sh@17 -- # local accel_module 00:24:09.076 08:20:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:24:09.076 08:20:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:24:09.076 08:20:42 -- accel/accel.sh@12 -- # build_accel_config 00:24:09.076 08:20:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:09.076 08:20:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:09.076 08:20:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:09.076 08:20:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:09.076 08:20:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:09.076 08:20:42 -- accel/accel.sh@41 -- # local IFS=, 00:24:09.076 08:20:42 -- accel/accel.sh@42 -- # jq -r . 00:24:09.076 [2024-04-17 08:20:42.223715] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:09.076 [2024-04-17 08:20:42.223793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56429 ] 00:24:09.076 [2024-04-17 08:20:42.349397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.335 [2024-04-17 08:20:42.445953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.712 08:20:43 -- accel/accel.sh@18 -- # out=' 00:24:10.712 SPDK Configuration: 00:24:10.712 Core mask: 0x1 00:24:10.712 00:24:10.712 Accel Perf Configuration: 00:24:10.712 Workload Type: crc32c 00:24:10.712 CRC-32C seed: 0 00:24:10.712 Transfer size: 4096 bytes 00:24:10.712 Vector count 2 00:24:10.712 Module: software 00:24:10.712 Queue depth: 32 00:24:10.712 Allocate depth: 32 00:24:10.712 # threads/core: 1 00:24:10.712 Run time: 1 seconds 00:24:10.712 Verify: Yes 00:24:10.712 00:24:10.712 Running for 1 seconds... 
00:24:10.712 00:24:10.712 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:10.712 ------------------------------------------------------------------------------------ 00:24:10.712 0,0 383904/s 2999 MiB/s 0 0 00:24:10.712 ==================================================================================== 00:24:10.712 Total 383904/s 1499 MiB/s 0 0' 00:24:10.712 08:20:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:24:10.712 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.712 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.712 08:20:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:24:10.712 08:20:43 -- accel/accel.sh@12 -- # build_accel_config 00:24:10.712 08:20:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:10.712 08:20:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:10.712 08:20:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:10.712 08:20:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:10.712 08:20:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:10.712 08:20:43 -- accel/accel.sh@41 -- # local IFS=, 00:24:10.712 08:20:43 -- accel/accel.sh@42 -- # jq -r . 00:24:10.712 [2024-04-17 08:20:43.676924] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:10.713 [2024-04-17 08:20:43.677024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56443 ] 00:24:10.713 [2024-04-17 08:20:43.820456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.713 [2024-04-17 08:20:43.926261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val= 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val= 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val=0x1 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val= 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val= 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val=crc32c 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val=0 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val= 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val=software 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@23 -- # accel_module=software 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val=32 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val=32 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val=1 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val=Yes 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val= 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:10.713 08:20:43 -- accel/accel.sh@21 -- # val= 00:24:10.713 08:20:43 -- accel/accel.sh@22 -- # case "$var" in 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # IFS=: 00:24:10.713 08:20:43 -- accel/accel.sh@20 -- # read -r var val 00:24:12.090 08:20:45 -- accel/accel.sh@21 -- # val= 00:24:12.090 08:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # IFS=: 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # read -r var val 00:24:12.090 08:20:45 -- accel/accel.sh@21 -- # val= 00:24:12.090 08:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # IFS=: 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # read -r var val 00:24:12.090 08:20:45 -- accel/accel.sh@21 -- # val= 00:24:12.090 08:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # IFS=: 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # read -r var val 00:24:12.090 08:20:45 -- accel/accel.sh@21 -- # val= 00:24:12.090 08:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # IFS=: 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # read -r var val 00:24:12.090 08:20:45 -- accel/accel.sh@21 -- # val= 00:24:12.090 08:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # IFS=: 00:24:12.090 08:20:45 -- 
accel/accel.sh@20 -- # read -r var val 00:24:12.090 08:20:45 -- accel/accel.sh@21 -- # val= 00:24:12.090 08:20:45 -- accel/accel.sh@22 -- # case "$var" in 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # IFS=: 00:24:12.090 08:20:45 -- accel/accel.sh@20 -- # read -r var val 00:24:12.090 08:20:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:12.090 08:20:45 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:24:12.090 08:20:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:12.090 00:24:12.090 real 0m2.952s 00:24:12.090 user 0m2.552s 00:24:12.090 sys 0m0.195s 00:24:12.090 08:20:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.090 08:20:45 -- common/autotest_common.sh@10 -- # set +x 00:24:12.090 ************************************ 00:24:12.090 END TEST accel_crc32c_C2 00:24:12.090 ************************************ 00:24:12.090 08:20:45 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:24:12.090 08:20:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:24:12.090 08:20:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:12.090 08:20:45 -- common/autotest_common.sh@10 -- # set +x 00:24:12.090 ************************************ 00:24:12.090 START TEST accel_copy 00:24:12.090 ************************************ 00:24:12.090 08:20:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:24:12.090 08:20:45 -- accel/accel.sh@16 -- # local accel_opc 00:24:12.090 08:20:45 -- accel/accel.sh@17 -- # local accel_module 00:24:12.090 08:20:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:24:12.090 08:20:45 -- accel/accel.sh@12 -- # build_accel_config 00:24:12.090 08:20:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:24:12.090 08:20:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:12.090 08:20:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:12.090 08:20:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:12.090 08:20:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:12.090 08:20:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:12.090 08:20:45 -- accel/accel.sh@41 -- # local IFS=, 00:24:12.090 08:20:45 -- accel/accel.sh@42 -- # jq -r . 00:24:12.090 [2024-04-17 08:20:45.228262] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:12.090 [2024-04-17 08:20:45.228367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56478 ] 00:24:12.090 [2024-04-17 08:20:45.352919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.350 [2024-04-17 08:20:45.479137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.726 08:20:46 -- accel/accel.sh@18 -- # out=' 00:24:13.726 SPDK Configuration: 00:24:13.726 Core mask: 0x1 00:24:13.726 00:24:13.726 Accel Perf Configuration: 00:24:13.726 Workload Type: copy 00:24:13.726 Transfer size: 4096 bytes 00:24:13.726 Vector count 1 00:24:13.726 Module: software 00:24:13.726 Queue depth: 32 00:24:13.726 Allocate depth: 32 00:24:13.726 # threads/core: 1 00:24:13.726 Run time: 1 seconds 00:24:13.726 Verify: Yes 00:24:13.726 00:24:13.726 Running for 1 seconds... 
00:24:13.726 00:24:13.726 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:13.726 ------------------------------------------------------------------------------------ 00:24:13.726 0,0 345696/s 1350 MiB/s 0 0 00:24:13.726 ==================================================================================== 00:24:13.726 Total 345696/s 1350 MiB/s 0 0' 00:24:13.726 08:20:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # read -r var val 00:24:13.726 08:20:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:24:13.726 08:20:46 -- accel/accel.sh@12 -- # build_accel_config 00:24:13.726 08:20:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:13.726 08:20:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:13.726 08:20:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:13.726 08:20:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:13.726 08:20:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:13.726 08:20:46 -- accel/accel.sh@41 -- # local IFS=, 00:24:13.726 08:20:46 -- accel/accel.sh@42 -- # jq -r . 00:24:13.726 [2024-04-17 08:20:46.707586] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:13.726 [2024-04-17 08:20:46.707693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56497 ] 00:24:13.726 [2024-04-17 08:20:46.849646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.726 [2024-04-17 08:20:46.954241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.726 08:20:46 -- accel/accel.sh@21 -- # val= 00:24:13.726 08:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # read -r var val 00:24:13.726 08:20:46 -- accel/accel.sh@21 -- # val= 00:24:13.726 08:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # read -r var val 00:24:13.726 08:20:46 -- accel/accel.sh@21 -- # val=0x1 00:24:13.726 08:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # read -r var val 00:24:13.726 08:20:46 -- accel/accel.sh@21 -- # val= 00:24:13.726 08:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.726 08:20:46 -- accel/accel.sh@20 -- # read -r var val 00:24:13.726 08:20:46 -- accel/accel.sh@21 -- # val= 00:24:13.727 08:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:46 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:46 -- accel/accel.sh@21 -- # val=copy 00:24:13.727 08:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:46 -- accel/accel.sh@24 -- # accel_opc=copy 00:24:13.727 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:46 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:13.727 08:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:46 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:46 -- 
accel/accel.sh@21 -- # val= 00:24:13.727 08:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:46 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:46 -- accel/accel.sh@21 -- # val=software 00:24:13.727 08:20:46 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:46 -- accel/accel.sh@23 -- # accel_module=software 00:24:13.727 08:20:46 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:47 -- accel/accel.sh@21 -- # val=32 00:24:13.727 08:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:47 -- accel/accel.sh@21 -- # val=32 00:24:13.727 08:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:47 -- accel/accel.sh@21 -- # val=1 00:24:13.727 08:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:13.727 08:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:47 -- accel/accel.sh@21 -- # val=Yes 00:24:13.727 08:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:47 -- accel/accel.sh@21 -- # val= 00:24:13.727 08:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # read -r var val 00:24:13.727 08:20:47 -- accel/accel.sh@21 -- # val= 00:24:13.727 08:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # IFS=: 00:24:13.727 08:20:47 -- accel/accel.sh@20 -- # read -r var val 00:24:15.104 08:20:48 -- accel/accel.sh@21 -- # val= 00:24:15.104 08:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:24:15.104 08:20:48 -- accel/accel.sh@20 -- # IFS=: 00:24:15.104 08:20:48 -- accel/accel.sh@20 -- # read -r var val 00:24:15.104 08:20:48 -- accel/accel.sh@21 -- # val= 00:24:15.104 08:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:24:15.104 08:20:48 -- accel/accel.sh@20 -- # IFS=: 00:24:15.105 08:20:48 -- accel/accel.sh@20 -- # read -r var val 00:24:15.105 08:20:48 -- accel/accel.sh@21 -- # val= 00:24:15.105 08:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:24:15.105 08:20:48 -- accel/accel.sh@20 -- # IFS=: 00:24:15.105 08:20:48 -- accel/accel.sh@20 -- # read -r var val 00:24:15.105 08:20:48 -- accel/accel.sh@21 -- # val= 00:24:15.105 08:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:24:15.105 08:20:48 -- accel/accel.sh@20 -- # IFS=: 00:24:15.105 08:20:48 -- accel/accel.sh@20 -- # read -r var val 00:24:15.105 08:20:48 -- accel/accel.sh@21 -- # val= 00:24:15.105 08:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:24:15.105 08:20:48 -- accel/accel.sh@20 -- # IFS=: 00:24:15.105 08:20:48 -- accel/accel.sh@20 -- # read -r var val 00:24:15.105 08:20:48 -- accel/accel.sh@21 -- # val= 00:24:15.105 08:20:48 -- accel/accel.sh@22 -- # case "$var" in 00:24:15.105 08:20:48 -- accel/accel.sh@20 -- # IFS=: 00:24:15.105 08:20:48 -- 
accel/accel.sh@20 -- # read -r var val 00:24:15.105 08:20:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:15.105 08:20:48 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:24:15.105 08:20:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:15.105 00:24:15.105 real 0m2.964s 00:24:15.105 user 0m1.287s 00:24:15.105 sys 0m0.104s 00:24:15.105 08:20:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.105 08:20:48 -- common/autotest_common.sh@10 -- # set +x 00:24:15.105 ************************************ 00:24:15.105 END TEST accel_copy 00:24:15.105 ************************************ 00:24:15.105 08:20:48 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:15.105 08:20:48 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:24:15.105 08:20:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:15.105 08:20:48 -- common/autotest_common.sh@10 -- # set +x 00:24:15.105 ************************************ 00:24:15.105 START TEST accel_fill 00:24:15.105 ************************************ 00:24:15.105 08:20:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:15.105 08:20:48 -- accel/accel.sh@16 -- # local accel_opc 00:24:15.105 08:20:48 -- accel/accel.sh@17 -- # local accel_module 00:24:15.105 08:20:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:15.105 08:20:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:15.105 08:20:48 -- accel/accel.sh@12 -- # build_accel_config 00:24:15.105 08:20:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:15.105 08:20:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:15.105 08:20:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:15.105 08:20:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:15.105 08:20:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:15.105 08:20:48 -- accel/accel.sh@41 -- # local IFS=, 00:24:15.105 08:20:48 -- accel/accel.sh@42 -- # jq -r . 00:24:15.105 [2024-04-17 08:20:48.230412] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:15.105 [2024-04-17 08:20:48.230514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56532 ] 00:24:15.105 [2024-04-17 08:20:48.366751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.364 [2024-04-17 08:20:48.473578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.743 08:20:49 -- accel/accel.sh@18 -- # out=' 00:24:16.743 SPDK Configuration: 00:24:16.743 Core mask: 0x1 00:24:16.743 00:24:16.743 Accel Perf Configuration: 00:24:16.743 Workload Type: fill 00:24:16.743 Fill pattern: 0x80 00:24:16.743 Transfer size: 4096 bytes 00:24:16.743 Vector count 1 00:24:16.743 Module: software 00:24:16.743 Queue depth: 64 00:24:16.743 Allocate depth: 64 00:24:16.743 # threads/core: 1 00:24:16.743 Run time: 1 seconds 00:24:16.743 Verify: Yes 00:24:16.743 00:24:16.743 Running for 1 seconds... 
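
Note how the harness's "-f 128" in the accel_fill command line shows up as "Fill pattern: 0x80" in the configuration dump above: the fill byte is given in decimal and echoed back in hex. A one-liner to confirm the mapping:

printf 'decimal 128 = 0x%x\n' 128   # -> decimal 128 = 0x80
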
00:24:16.743 00:24:16.743 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:16.743 ------------------------------------------------------------------------------------ 00:24:16.743 0,0 540992/s 2113 MiB/s 0 0 00:24:16.743 ==================================================================================== 00:24:16.743 Total 540992/s 2113 MiB/s 0 0' 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.743 08:20:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.743 08:20:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:16.743 08:20:49 -- accel/accel.sh@12 -- # build_accel_config 00:24:16.743 08:20:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:16.743 08:20:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:16.743 08:20:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:16.743 08:20:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:16.743 08:20:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:16.743 08:20:49 -- accel/accel.sh@41 -- # local IFS=, 00:24:16.743 08:20:49 -- accel/accel.sh@42 -- # jq -r . 00:24:16.743 [2024-04-17 08:20:49.709011] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:16.743 [2024-04-17 08:20:49.709092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56551 ] 00:24:16.743 [2024-04-17 08:20:49.847858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.743 [2024-04-17 08:20:49.946225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.743 08:20:49 -- accel/accel.sh@21 -- # val= 00:24:16.743 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.743 08:20:49 -- accel/accel.sh@21 -- # val= 00:24:16.743 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.743 08:20:49 -- accel/accel.sh@21 -- # val=0x1 00:24:16.743 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.743 08:20:49 -- accel/accel.sh@21 -- # val= 00:24:16.743 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.743 08:20:49 -- accel/accel.sh@21 -- # val= 00:24:16.743 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.743 08:20:49 -- accel/accel.sh@21 -- # val=fill 00:24:16.743 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.743 08:20:49 -- accel/accel.sh@24 -- # accel_opc=fill 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.743 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:49 -- accel/accel.sh@21 -- # val=0x80 00:24:16.744 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:49 -- accel/accel.sh@20 -- # read -r var val 
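
Across these runs the Bandwidth column is simply transfers per second times the 4096-byte transfer size, reported in MiB/s (the crc32c -C 2 case additionally scales by its vector count). Checking the fill figures just above (illustrative shell, not part of the test):

transfers=540992; xfer_size=4096
echo "$(( transfers * xfer_size / 1024 / 1024 )) MiB/s"   # -> 2113 MiB/s, as reported for the fill run
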
00:24:16.744 08:20:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:16.744 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:49 -- accel/accel.sh@21 -- # val= 00:24:16.744 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:49 -- accel/accel.sh@21 -- # val=software 00:24:16.744 08:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:49 -- accel/accel.sh@23 -- # accel_module=software 00:24:16.744 08:20:49 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:49 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:50 -- accel/accel.sh@21 -- # val=64 00:24:16.744 08:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:50 -- accel/accel.sh@21 -- # val=64 00:24:16.744 08:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:50 -- accel/accel.sh@21 -- # val=1 00:24:16.744 08:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:16.744 08:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:50 -- accel/accel.sh@21 -- # val=Yes 00:24:16.744 08:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:50 -- accel/accel.sh@21 -- # val= 00:24:16.744 08:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # read -r var val 00:24:16.744 08:20:50 -- accel/accel.sh@21 -- # val= 00:24:16.744 08:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # IFS=: 00:24:16.744 08:20:50 -- accel/accel.sh@20 -- # read -r var val 00:24:18.121 08:20:51 -- accel/accel.sh@21 -- # val= 00:24:18.121 08:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # IFS=: 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # read -r var val 00:24:18.121 08:20:51 -- accel/accel.sh@21 -- # val= 00:24:18.121 08:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # IFS=: 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # read -r var val 00:24:18.121 08:20:51 -- accel/accel.sh@21 -- # val= 00:24:18.121 08:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # IFS=: 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # read -r var val 00:24:18.121 08:20:51 -- accel/accel.sh@21 -- # val= 00:24:18.121 08:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # IFS=: 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # read -r var val 00:24:18.121 08:20:51 -- accel/accel.sh@21 -- # val= 00:24:18.121 08:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # IFS=: 
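The long runs of case "$var" / IFS=: / read -r var val entries above are bash xtrace from accel.sh re-reading the configuration summary that accel_perf printed, pulling out the operation and module names (the accel_opc=fill and accel_module=software assignments visible in the trace). A simplified sketch of that parsing loop, with illustrative match patterns rather than the literal accel.sh code:

    # walk the "Key: value" lines of the accel_perf summary captured in $out (simplified)
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;    # e.g. fill
            *"Module"*)        accel_module=${val//[[:space:]]/} ;; # e.g. software
        esac
    done <<< "$out"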
00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # read -r var val 00:24:18.121 08:20:51 -- accel/accel.sh@21 -- # val= 00:24:18.121 08:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # IFS=: 00:24:18.121 08:20:51 -- accel/accel.sh@20 -- # read -r var val 00:24:18.121 08:20:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:18.121 08:20:51 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:24:18.121 08:20:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:18.121 00:24:18.121 real 0m2.946s 00:24:18.121 user 0m2.550s 00:24:18.121 sys 0m0.200s 00:24:18.121 08:20:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.121 08:20:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.121 ************************************ 00:24:18.121 END TEST accel_fill 00:24:18.121 ************************************ 00:24:18.121 08:20:51 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:24:18.121 08:20:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:24:18.121 08:20:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:18.121 08:20:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.121 ************************************ 00:24:18.121 START TEST accel_copy_crc32c 00:24:18.121 ************************************ 00:24:18.121 08:20:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:24:18.121 08:20:51 -- accel/accel.sh@16 -- # local accel_opc 00:24:18.121 08:20:51 -- accel/accel.sh@17 -- # local accel_module 00:24:18.121 08:20:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:24:18.121 08:20:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:24:18.121 08:20:51 -- accel/accel.sh@12 -- # build_accel_config 00:24:18.121 08:20:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:18.121 08:20:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:18.121 08:20:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:18.121 08:20:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:18.121 08:20:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:18.121 08:20:51 -- accel/accel.sh@41 -- # local IFS=, 00:24:18.121 08:20:51 -- accel/accel.sh@42 -- # jq -r . 00:24:18.121 [2024-04-17 08:20:51.240028] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:18.121 [2024-04-17 08:20:51.240126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56583 ] 00:24:18.121 [2024-04-17 08:20:51.364386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.380 [2024-04-17 08:20:51.491353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.755 08:20:52 -- accel/accel.sh@18 -- # out=' 00:24:19.755 SPDK Configuration: 00:24:19.755 Core mask: 0x1 00:24:19.755 00:24:19.755 Accel Perf Configuration: 00:24:19.755 Workload Type: copy_crc32c 00:24:19.756 CRC-32C seed: 0 00:24:19.756 Vector size: 4096 bytes 00:24:19.756 Transfer size: 4096 bytes 00:24:19.756 Vector count 1 00:24:19.756 Module: software 00:24:19.756 Queue depth: 32 00:24:19.756 Allocate depth: 32 00:24:19.756 # threads/core: 1 00:24:19.756 Run time: 1 seconds 00:24:19.756 Verify: Yes 00:24:19.756 00:24:19.756 Running for 1 seconds... 
00:24:19.756 00:24:19.756 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:19.756 ------------------------------------------------------------------------------------ 00:24:19.756 0,0 274400/s 1071 MiB/s 0 0 00:24:19.756 ==================================================================================== 00:24:19.756 Total 274400/s 1071 MiB/s 0 0' 00:24:19.756 08:20:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:24:19.756 08:20:52 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:24:19.756 08:20:52 -- accel/accel.sh@12 -- # build_accel_config 00:24:19.756 08:20:52 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:19.756 08:20:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:19.756 08:20:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:19.756 08:20:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:19.756 08:20:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:19.756 08:20:52 -- accel/accel.sh@41 -- # local IFS=, 00:24:19.756 08:20:52 -- accel/accel.sh@42 -- # jq -r . 00:24:19.756 [2024-04-17 08:20:52.720875] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:19.756 [2024-04-17 08:20:52.720967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56605 ] 00:24:19.756 [2024-04-17 08:20:52.860986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.756 [2024-04-17 08:20:52.960111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val= 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val= 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val=0x1 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val= 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val= 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val=copy_crc32c 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val=0 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 
08:20:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val= 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val=software 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@23 -- # accel_module=software 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val=32 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val=32 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val=1 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val=Yes 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val= 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:19.756 08:20:53 -- accel/accel.sh@21 -- # val= 00:24:19.756 08:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # IFS=: 00:24:19.756 08:20:53 -- accel/accel.sh@20 -- # read -r var val 00:24:21.148 08:20:54 -- accel/accel.sh@21 -- # val= 00:24:21.148 08:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # IFS=: 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # read -r var val 00:24:21.148 08:20:54 -- accel/accel.sh@21 -- # val= 00:24:21.148 08:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # IFS=: 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # read -r var val 00:24:21.148 08:20:54 -- accel/accel.sh@21 -- # val= 00:24:21.148 08:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # IFS=: 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # read -r var val 00:24:21.148 08:20:54 -- accel/accel.sh@21 -- # val= 00:24:21.148 08:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # IFS=: 
00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # read -r var val 00:24:21.148 08:20:54 -- accel/accel.sh@21 -- # val= 00:24:21.148 08:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # IFS=: 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # read -r var val 00:24:21.148 08:20:54 -- accel/accel.sh@21 -- # val= 00:24:21.148 08:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # IFS=: 00:24:21.148 08:20:54 -- accel/accel.sh@20 -- # read -r var val 00:24:21.148 08:20:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:21.148 08:20:54 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:24:21.148 08:20:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:21.148 00:24:21.148 real 0m2.962s 00:24:21.148 user 0m2.567s 00:24:21.148 sys 0m0.200s 00:24:21.148 ************************************ 00:24:21.148 END TEST accel_copy_crc32c 00:24:21.148 ************************************ 00:24:21.148 08:20:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.148 08:20:54 -- common/autotest_common.sh@10 -- # set +x 00:24:21.148 08:20:54 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:24:21.148 08:20:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:24:21.148 08:20:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:21.148 08:20:54 -- common/autotest_common.sh@10 -- # set +x 00:24:21.148 ************************************ 00:24:21.148 START TEST accel_copy_crc32c_C2 00:24:21.148 ************************************ 00:24:21.148 08:20:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:24:21.148 08:20:54 -- accel/accel.sh@16 -- # local accel_opc 00:24:21.148 08:20:54 -- accel/accel.sh@17 -- # local accel_module 00:24:21.148 08:20:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:24:21.148 08:20:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:24:21.148 08:20:54 -- accel/accel.sh@12 -- # build_accel_config 00:24:21.148 08:20:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:21.148 08:20:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:21.148 08:20:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:21.148 08:20:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:21.148 08:20:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:21.148 08:20:54 -- accel/accel.sh@41 -- # local IFS=, 00:24:21.148 08:20:54 -- accel/accel.sh@42 -- # jq -r . 00:24:21.148 [2024-04-17 08:20:54.263266] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
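As a quick cross-check of the throughput figures in these summaries: the MiB/s column is simply the reported transfers per second multiplied by the transfer size. For the copy_crc32c run above (274400 transfers/s at 4096 bytes each):

    # 274400 transfers/s x 4096 B expressed in MiB/s; matches the 1071 MiB/s reported above
    echo $(( 274400 * 4096 / 1024 / 1024 ))    # -> 1071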
00:24:21.148 [2024-04-17 08:20:54.263391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56634 ] [2024-04-17 08:20:54.400789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.414 [2024-04-17 08:20:54.496282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.795 08:20:55 -- accel/accel.sh@18 -- # out=' 00:24:22.795 SPDK Configuration: 00:24:22.795 Core mask: 0x1 00:24:22.795 00:24:22.795 Accel Perf Configuration: 00:24:22.795 Workload Type: copy_crc32c 00:24:22.795 CRC-32C seed: 0 00:24:22.795 Vector size: 4096 bytes 00:24:22.795 Transfer size: 8192 bytes 00:24:22.795 Vector count 2 00:24:22.795 Module: software 00:24:22.795 Queue depth: 32 00:24:22.795 Allocate depth: 32 00:24:22.795 # threads/core: 1 00:24:22.795 Run time: 1 seconds 00:24:22.795 Verify: Yes 00:24:22.795 00:24:22.795 Running for 1 seconds... 00:24:22.795 00:24:22.795 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:22.795 ------------------------------------------------------------------------------------ 00:24:22.795 0,0 214240/s 1673 MiB/s 0 0 00:24:22.795 ==================================================================================== 00:24:22.795 Total 214240/s 1673 MiB/s 0 0' 00:24:22.795 08:20:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:24:22.795 08:20:55 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:55 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:24:22.795 08:20:55 -- accel/accel.sh@12 -- # build_accel_config 00:24:22.795 08:20:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:22.795 08:20:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:22.795 08:20:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:22.795 08:20:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:22.795 08:20:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:22.795 08:20:55 -- accel/accel.sh@41 -- # local IFS=, 00:24:22.795 08:20:55 -- accel/accel.sh@42 -- # jq -r . 00:24:22.795 [2024-04-17 08:20:55.715677] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:24:22.795 [2024-04-17 08:20:55.715743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56654 ] 00:24:22.795 [2024-04-17 08:20:55.857489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.795 [2024-04-17 08:20:55.955134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.795 08:20:55 -- accel/accel.sh@21 -- # val= 00:24:22.795 08:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.795 08:20:55 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:55 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:55 -- accel/accel.sh@21 -- # val= 00:24:22.795 08:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.795 08:20:55 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:55 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:55 -- accel/accel.sh@21 -- # val=0x1 00:24:22.795 08:20:55 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:56 -- accel/accel.sh@21 -- # val= 00:24:22.795 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:56 -- accel/accel.sh@21 -- # val= 00:24:22.795 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:56 -- accel/accel.sh@21 -- # val=copy_crc32c 00:24:22.795 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.795 08:20:56 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:56 -- accel/accel.sh@21 -- # val=0 00:24:22.795 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:22.795 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:56 -- accel/accel.sh@21 -- # val='8192 bytes' 00:24:22.795 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.795 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.795 08:20:56 -- accel/accel.sh@21 -- # val= 00:24:22.796 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.796 08:20:56 -- accel/accel.sh@21 -- # val=software 00:24:22.796 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.796 08:20:56 -- accel/accel.sh@23 -- # accel_module=software 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.796 08:20:56 -- accel/accel.sh@21 -- # val=32 00:24:22.796 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.796 08:20:56 -- accel/accel.sh@21 -- # val=32 
00:24:22.796 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.796 08:20:56 -- accel/accel.sh@21 -- # val=1 00:24:22.796 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.796 08:20:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:22.796 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.796 08:20:56 -- accel/accel.sh@21 -- # val=Yes 00:24:22.796 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.796 08:20:56 -- accel/accel.sh@21 -- # val= 00:24:22.796 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:22.796 08:20:56 -- accel/accel.sh@21 -- # val= 00:24:22.796 08:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # IFS=: 00:24:22.796 08:20:56 -- accel/accel.sh@20 -- # read -r var val 00:24:24.176 08:20:57 -- accel/accel.sh@21 -- # val= 00:24:24.176 08:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # IFS=: 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # read -r var val 00:24:24.176 08:20:57 -- accel/accel.sh@21 -- # val= 00:24:24.176 08:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # IFS=: 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # read -r var val 00:24:24.176 08:20:57 -- accel/accel.sh@21 -- # val= 00:24:24.176 08:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # IFS=: 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # read -r var val 00:24:24.176 08:20:57 -- accel/accel.sh@21 -- # val= 00:24:24.176 08:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # IFS=: 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # read -r var val 00:24:24.176 08:20:57 -- accel/accel.sh@21 -- # val= 00:24:24.176 08:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # IFS=: 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # read -r var val 00:24:24.176 08:20:57 -- accel/accel.sh@21 -- # val= 00:24:24.176 08:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # IFS=: 00:24:24.176 08:20:57 -- accel/accel.sh@20 -- # read -r var val 00:24:24.176 08:20:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:24.176 08:20:57 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:24:24.176 08:20:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:24.176 00:24:24.176 real 0m2.938s 00:24:24.176 user 0m2.560s 00:24:24.176 sys 0m0.183s 00:24:24.176 08:20:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:24.176 08:20:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.176 ************************************ 00:24:24.176 END TEST accel_copy_crc32c_C2 00:24:24.176 ************************************ 00:24:24.176 08:20:57 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:24:24.176 08:20:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
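The accel_copy_crc32c_C2 case that just finished differs from the earlier accel_copy_crc32c run only in its vector count: -C 2 chains two 4096-byte source vectors per operation, which is why its configuration dump reports an 8192-byte transfer size. Outside the harness the two runs amount to the following (binary path taken from the trace above; the surrounding environment details are illustrative):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y         # 1 vector,  4 KiB per transfer
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2    # 2 vectors, 8 KiB per transfer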
00:24:24.176 08:20:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:24.176 08:20:57 -- common/autotest_common.sh@10 -- # set +x 00:24:24.176 ************************************ 00:24:24.176 START TEST accel_dualcast 00:24:24.176 ************************************ 00:24:24.176 08:20:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:24:24.176 08:20:57 -- accel/accel.sh@16 -- # local accel_opc 00:24:24.176 08:20:57 -- accel/accel.sh@17 -- # local accel_module 00:24:24.176 08:20:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:24:24.176 08:20:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:24:24.176 08:20:57 -- accel/accel.sh@12 -- # build_accel_config 00:24:24.176 08:20:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:24.176 08:20:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:24.176 08:20:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:24.176 08:20:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:24.176 08:20:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:24.176 08:20:57 -- accel/accel.sh@41 -- # local IFS=, 00:24:24.176 08:20:57 -- accel/accel.sh@42 -- # jq -r . 00:24:24.176 [2024-04-17 08:20:57.258706] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:24.176 [2024-04-17 08:20:57.258787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56688 ] 00:24:24.176 [2024-04-17 08:20:57.396549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.176 [2024-04-17 08:20:57.494488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.555 08:20:58 -- accel/accel.sh@18 -- # out=' 00:24:25.555 SPDK Configuration: 00:24:25.555 Core mask: 0x1 00:24:25.555 00:24:25.555 Accel Perf Configuration: 00:24:25.555 Workload Type: dualcast 00:24:25.555 Transfer size: 4096 bytes 00:24:25.555 Vector count 1 00:24:25.555 Module: software 00:24:25.555 Queue depth: 32 00:24:25.555 Allocate depth: 32 00:24:25.555 # threads/core: 1 00:24:25.555 Run time: 1 seconds 00:24:25.555 Verify: Yes 00:24:25.555 00:24:25.555 Running for 1 seconds... 00:24:25.555 00:24:25.555 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:25.555 ------------------------------------------------------------------------------------ 00:24:25.555 0,0 469920/s 1835 MiB/s 0 0 00:24:25.555 ==================================================================================== 00:24:25.555 Total 469920/s 1835 MiB/s 0 0' 00:24:25.555 08:20:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:24:25.556 08:20:58 -- accel/accel.sh@20 -- # IFS=: 00:24:25.556 08:20:58 -- accel/accel.sh@20 -- # read -r var val 00:24:25.556 08:20:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:24:25.556 08:20:58 -- accel/accel.sh@12 -- # build_accel_config 00:24:25.556 08:20:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:25.556 08:20:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:25.556 08:20:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:25.556 08:20:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:25.556 08:20:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:25.556 08:20:58 -- accel/accel.sh@41 -- # local IFS=, 00:24:25.556 08:20:58 -- accel/accel.sh@42 -- # jq -r . 
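Each test block in this log is emitted by the harness's run_test wrapper, which is where the repeated banner lines, the xtrace_disable/set +x bookkeeping, and the real/user/sys summaries come from: it prints a START TEST banner, times the named command, then prints an END TEST banner. A much-reduced sketch of that pattern (not the actual autotest_common.sh implementation):

    # minimal stand-in for the run_test wrapper seen throughout this log
    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                      # e.g. accel_test -t 1 -w dualcast -y
        echo "END TEST $name"
    }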
00:24:25.556 [2024-04-17 08:20:58.728574] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:25.556 [2024-04-17 08:20:58.728655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56708 ] 00:24:25.556 [2024-04-17 08:20:58.868447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.815 [2024-04-17 08:20:58.960502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val= 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val= 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val=0x1 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val= 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val= 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val=dualcast 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val= 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val=software 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@23 -- # accel_module=software 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val=32 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val=32 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val=1 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 
08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val=Yes 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val= 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:25.815 08:20:59 -- accel/accel.sh@21 -- # val= 00:24:25.815 08:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # IFS=: 00:24:25.815 08:20:59 -- accel/accel.sh@20 -- # read -r var val 00:24:27.196 08:21:00 -- accel/accel.sh@21 -- # val= 00:24:27.196 08:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # IFS=: 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # read -r var val 00:24:27.196 08:21:00 -- accel/accel.sh@21 -- # val= 00:24:27.196 08:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # IFS=: 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # read -r var val 00:24:27.196 08:21:00 -- accel/accel.sh@21 -- # val= 00:24:27.196 08:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # IFS=: 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # read -r var val 00:24:27.196 08:21:00 -- accel/accel.sh@21 -- # val= 00:24:27.196 08:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # IFS=: 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # read -r var val 00:24:27.196 08:21:00 -- accel/accel.sh@21 -- # val= 00:24:27.196 08:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # IFS=: 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # read -r var val 00:24:27.196 08:21:00 -- accel/accel.sh@21 -- # val= 00:24:27.196 08:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # IFS=: 00:24:27.196 08:21:00 -- accel/accel.sh@20 -- # read -r var val 00:24:27.196 08:21:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:27.196 08:21:00 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:24:27.196 08:21:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:27.196 00:24:27.196 real 0m2.945s 00:24:27.196 user 0m2.572s 00:24:27.196 sys 0m0.179s 00:24:27.196 08:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.196 08:21:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.196 ************************************ 00:24:27.196 END TEST accel_dualcast 00:24:27.196 ************************************ 00:24:27.196 08:21:00 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:24:27.196 08:21:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:24:27.196 08:21:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:27.196 08:21:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.196 ************************************ 00:24:27.196 START TEST accel_compare 00:24:27.196 ************************************ 00:24:27.196 08:21:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:24:27.196 
08:21:00 -- accel/accel.sh@16 -- # local accel_opc 00:24:27.196 08:21:00 -- accel/accel.sh@17 -- # local accel_module 00:24:27.196 08:21:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:24:27.196 08:21:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:24:27.196 08:21:00 -- accel/accel.sh@12 -- # build_accel_config 00:24:27.196 08:21:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:27.196 08:21:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:27.196 08:21:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:27.196 08:21:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:27.196 08:21:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:27.196 08:21:00 -- accel/accel.sh@41 -- # local IFS=, 00:24:27.196 08:21:00 -- accel/accel.sh@42 -- # jq -r . 00:24:27.196 [2024-04-17 08:21:00.249018] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:27.196 [2024-04-17 08:21:00.249138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56742 ] 00:24:27.196 [2024-04-17 08:21:00.389085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.196 [2024-04-17 08:21:00.483130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.602 08:21:01 -- accel/accel.sh@18 -- # out=' 00:24:28.602 SPDK Configuration: 00:24:28.602 Core mask: 0x1 00:24:28.602 00:24:28.602 Accel Perf Configuration: 00:24:28.602 Workload Type: compare 00:24:28.602 Transfer size: 4096 bytes 00:24:28.602 Vector count 1 00:24:28.602 Module: software 00:24:28.602 Queue depth: 32 00:24:28.602 Allocate depth: 32 00:24:28.602 # threads/core: 1 00:24:28.602 Run time: 1 seconds 00:24:28.602 Verify: Yes 00:24:28.602 00:24:28.602 Running for 1 seconds... 00:24:28.602 00:24:28.602 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:28.602 ------------------------------------------------------------------------------------ 00:24:28.602 0,0 546496/s 2134 MiB/s 0 0 00:24:28.602 ==================================================================================== 00:24:28.602 Total 546496/s 2134 MiB/s 0 0' 00:24:28.602 08:21:01 -- accel/accel.sh@20 -- # IFS=: 00:24:28.602 08:21:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:24:28.602 08:21:01 -- accel/accel.sh@20 -- # read -r var val 00:24:28.602 08:21:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:24:28.602 08:21:01 -- accel/accel.sh@12 -- # build_accel_config 00:24:28.602 08:21:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:28.602 08:21:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:28.602 08:21:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:28.602 08:21:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:28.602 08:21:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:28.602 08:21:01 -- accel/accel.sh@41 -- # local IFS=, 00:24:28.602 08:21:01 -- accel/accel.sh@42 -- # jq -r . 00:24:28.602 [2024-04-17 08:21:01.725735] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
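The compare workload exercised here byte-compares two buffers rather than moving data, so a mismatch would be expected to show up in the Miscompares column of the summary (0 in the run above). The standalone form of the invocation from the trace is simply:

    ./build/examples/accel_perf -t 1 -w compare -y    # 4 KiB buffer-compare on the software module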
00:24:28.602 [2024-04-17 08:21:01.725921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56762 ] 00:24:28.602 [2024-04-17 08:21:01.866493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.862 [2024-04-17 08:21:01.969079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val= 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val= 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val=0x1 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val= 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val= 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val=compare 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@24 -- # accel_opc=compare 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val= 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val=software 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@23 -- # accel_module=software 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val=32 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val=32 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val=1 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val='1 seconds' 
00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val=Yes 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val= 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:28.862 08:21:02 -- accel/accel.sh@21 -- # val= 00:24:28.862 08:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # IFS=: 00:24:28.862 08:21:02 -- accel/accel.sh@20 -- # read -r var val 00:24:30.240 08:21:03 -- accel/accel.sh@21 -- # val= 00:24:30.240 08:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # IFS=: 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # read -r var val 00:24:30.240 08:21:03 -- accel/accel.sh@21 -- # val= 00:24:30.240 08:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # IFS=: 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # read -r var val 00:24:30.240 08:21:03 -- accel/accel.sh@21 -- # val= 00:24:30.240 08:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # IFS=: 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # read -r var val 00:24:30.240 08:21:03 -- accel/accel.sh@21 -- # val= 00:24:30.240 08:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # IFS=: 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # read -r var val 00:24:30.240 08:21:03 -- accel/accel.sh@21 -- # val= 00:24:30.240 08:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # IFS=: 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # read -r var val 00:24:30.240 08:21:03 -- accel/accel.sh@21 -- # val= 00:24:30.240 08:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # IFS=: 00:24:30.240 08:21:03 -- accel/accel.sh@20 -- # read -r var val 00:24:30.240 08:21:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:30.240 08:21:03 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:24:30.241 08:21:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:30.241 00:24:30.241 real 0m2.946s 00:24:30.241 user 0m1.283s 00:24:30.241 sys 0m0.094s 00:24:30.241 08:21:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:30.241 08:21:03 -- common/autotest_common.sh@10 -- # set +x 00:24:30.241 ************************************ 00:24:30.241 END TEST accel_compare 00:24:30.241 ************************************ 00:24:30.241 08:21:03 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:24:30.241 08:21:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:24:30.241 08:21:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:30.241 08:21:03 -- common/autotest_common.sh@10 -- # set +x 00:24:30.241 ************************************ 00:24:30.241 START TEST accel_xor 00:24:30.241 ************************************ 00:24:30.241 08:21:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:24:30.241 08:21:03 -- accel/accel.sh@16 -- # local accel_opc 00:24:30.241 08:21:03 -- accel/accel.sh@17 -- # local accel_module 00:24:30.241 
08:21:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:24:30.241 08:21:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:24:30.241 08:21:03 -- accel/accel.sh@12 -- # build_accel_config 00:24:30.241 08:21:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:30.241 08:21:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:30.241 08:21:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:30.241 08:21:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:30.241 08:21:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:30.241 08:21:03 -- accel/accel.sh@41 -- # local IFS=, 00:24:30.241 08:21:03 -- accel/accel.sh@42 -- # jq -r . 00:24:30.241 [2024-04-17 08:21:03.274353] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:30.241 [2024-04-17 08:21:03.274510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56796 ] 00:24:30.241 [2024-04-17 08:21:03.415594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.241 [2024-04-17 08:21:03.516301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.622 08:21:04 -- accel/accel.sh@18 -- # out=' 00:24:31.622 SPDK Configuration: 00:24:31.622 Core mask: 0x1 00:24:31.622 00:24:31.622 Accel Perf Configuration: 00:24:31.622 Workload Type: xor 00:24:31.622 Source buffers: 2 00:24:31.622 Transfer size: 4096 bytes 00:24:31.622 Vector count 1 00:24:31.622 Module: software 00:24:31.622 Queue depth: 32 00:24:31.622 Allocate depth: 32 00:24:31.622 # threads/core: 1 00:24:31.622 Run time: 1 seconds 00:24:31.622 Verify: Yes 00:24:31.622 00:24:31.622 Running for 1 seconds... 00:24:31.622 00:24:31.622 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:31.622 ------------------------------------------------------------------------------------ 00:24:31.622 0,0 386112/s 1508 MiB/s 0 0 00:24:31.622 ==================================================================================== 00:24:31.622 Total 386112/s 1508 MiB/s 0 0' 00:24:31.622 08:21:04 -- accel/accel.sh@20 -- # IFS=: 00:24:31.622 08:21:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:24:31.622 08:21:04 -- accel/accel.sh@20 -- # read -r var val 00:24:31.622 08:21:04 -- accel/accel.sh@12 -- # build_accel_config 00:24:31.622 08:21:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:24:31.622 08:21:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:31.622 08:21:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:31.622 08:21:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:31.622 08:21:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:31.622 08:21:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:31.622 08:21:04 -- accel/accel.sh@41 -- # local IFS=, 00:24:31.622 08:21:04 -- accel/accel.sh@42 -- # jq -r . 00:24:31.622 [2024-04-17 08:21:04.749147] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
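The xor pass above uses the default two source buffers ('Source buffers: 2' in its configuration dump); the follow-up test later in this section adds -x 3 to XOR three source buffers into the destination instead. Side by side, with the same assumed binary path as before:

    ./build/examples/accel_perf -t 1 -w xor -y          # XOR of 2 source buffers
    ./build/examples/accel_perf -t 1 -w xor -y -x 3     # XOR of 3 source buffers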
00:24:31.622 [2024-04-17 08:21:04.749243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56816 ] 00:24:31.622 [2024-04-17 08:21:04.887717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.882 [2024-04-17 08:21:04.993841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val= 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val= 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val=0x1 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val= 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val= 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val=xor 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@24 -- # accel_opc=xor 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val=2 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val= 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val=software 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@23 -- # accel_module=software 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val=32 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val=32 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val=1 00:24:31.882 08:21:05 -- 
accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val=Yes 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val= 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:31.882 08:21:05 -- accel/accel.sh@21 -- # val= 00:24:31.882 08:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # IFS=: 00:24:31.882 08:21:05 -- accel/accel.sh@20 -- # read -r var val 00:24:33.264 08:21:06 -- accel/accel.sh@21 -- # val= 00:24:33.264 08:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # IFS=: 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # read -r var val 00:24:33.264 08:21:06 -- accel/accel.sh@21 -- # val= 00:24:33.264 08:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # IFS=: 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # read -r var val 00:24:33.264 08:21:06 -- accel/accel.sh@21 -- # val= 00:24:33.264 08:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # IFS=: 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # read -r var val 00:24:33.264 08:21:06 -- accel/accel.sh@21 -- # val= 00:24:33.264 08:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # IFS=: 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # read -r var val 00:24:33.264 08:21:06 -- accel/accel.sh@21 -- # val= 00:24:33.264 08:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # IFS=: 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # read -r var val 00:24:33.264 08:21:06 -- accel/accel.sh@21 -- # val= 00:24:33.264 08:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # IFS=: 00:24:33.264 08:21:06 -- accel/accel.sh@20 -- # read -r var val 00:24:33.264 08:21:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:33.264 08:21:06 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:24:33.264 08:21:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:33.264 00:24:33.264 real 0m2.969s 00:24:33.264 user 0m2.574s 00:24:33.264 sys 0m0.194s 00:24:33.264 08:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:33.264 08:21:06 -- common/autotest_common.sh@10 -- # set +x 00:24:33.264 ************************************ 00:24:33.264 END TEST accel_xor 00:24:33.264 ************************************ 00:24:33.264 08:21:06 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:24:33.264 08:21:06 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:24:33.264 08:21:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:33.264 08:21:06 -- common/autotest_common.sh@10 -- # set +x 00:24:33.264 ************************************ 00:24:33.264 START TEST accel_xor 00:24:33.264 ************************************ 00:24:33.264 
08:21:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:24:33.264 08:21:06 -- accel/accel.sh@16 -- # local accel_opc 00:24:33.264 08:21:06 -- accel/accel.sh@17 -- # local accel_module 00:24:33.264 08:21:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:24:33.264 08:21:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:24:33.264 08:21:06 -- accel/accel.sh@12 -- # build_accel_config 00:24:33.264 08:21:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:33.264 08:21:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:33.264 08:21:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:33.264 08:21:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:33.264 08:21:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:33.264 08:21:06 -- accel/accel.sh@41 -- # local IFS=, 00:24:33.264 08:21:06 -- accel/accel.sh@42 -- # jq -r . 00:24:33.264 [2024-04-17 08:21:06.298127] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:33.264 [2024-04-17 08:21:06.298299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56845 ] 00:24:33.264 [2024-04-17 08:21:06.437664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.264 [2024-04-17 08:21:06.537961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.644 08:21:07 -- accel/accel.sh@18 -- # out=' 00:24:34.644 SPDK Configuration: 00:24:34.644 Core mask: 0x1 00:24:34.644 00:24:34.644 Accel Perf Configuration: 00:24:34.644 Workload Type: xor 00:24:34.644 Source buffers: 3 00:24:34.644 Transfer size: 4096 bytes 00:24:34.644 Vector count 1 00:24:34.644 Module: software 00:24:34.644 Queue depth: 32 00:24:34.644 Allocate depth: 32 00:24:34.644 # threads/core: 1 00:24:34.644 Run time: 1 seconds 00:24:34.644 Verify: Yes 00:24:34.644 00:24:34.644 Running for 1 seconds... 00:24:34.644 00:24:34.644 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:34.644 ------------------------------------------------------------------------------------ 00:24:34.644 0,0 375648/s 1467 MiB/s 0 0 00:24:34.644 ==================================================================================== 00:24:34.644 Total 375648/s 1467 MiB/s 0 0' 00:24:34.644 08:21:07 -- accel/accel.sh@20 -- # IFS=: 00:24:34.644 08:21:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:24:34.644 08:21:07 -- accel/accel.sh@20 -- # read -r var val 00:24:34.644 08:21:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:24:34.644 08:21:07 -- accel/accel.sh@12 -- # build_accel_config 00:24:34.644 08:21:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:34.644 08:21:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:34.644 08:21:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:34.644 08:21:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:34.644 08:21:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:34.644 08:21:07 -- accel/accel.sh@41 -- # local IFS=, 00:24:34.644 08:21:07 -- accel/accel.sh@42 -- # jq -r . 00:24:34.644 [2024-04-17 08:21:07.776277] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
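Note on the xor run above: accel_perf is asked for the software xor path with three 4096-byte source buffers (-w xor -x 3), and the bandwidth column is simply the transfer rate multiplied by the 4096-byte transfer size. The short Python sketch below illustrates both points; it is illustrative only (plain Python, not SPDK or accel_perf code), and the 375648 transfers/s figure is the one reported in the table above.

# Illustrative sketch only -- not SPDK/accel_perf code.
# dst = src0 ^ src1 ^ src2 over 4096-byte buffers, as in "-w xor -x 3".
import os

XFER_SIZE = 4096                                    # "Transfer size: 4096 bytes"
srcs = [os.urandom(XFER_SIZE) for _ in range(3)]    # "Source buffers: 3"
dst = bytes(a ^ b ^ c for a, b, c in zip(*srcs))

# -y ("Verify: Yes") recomputes the xor and compares against dst.
assert dst == bytes(a ^ b ^ c for a, b, c in zip(*srcs))

# Bandwidth column = transfers/s * transfer size, shown in MiB/s:
print(375648 * XFER_SIZE / (1024 * 1024))           # ~1467 MiB/s, as reported above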
00:24:34.644 [2024-04-17 08:21:07.776400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56870 ] 00:24:34.644 [2024-04-17 08:21:07.916037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.903 [2024-04-17 08:21:08.017443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.903 08:21:08 -- accel/accel.sh@21 -- # val= 00:24:34.903 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.903 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.903 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.903 08:21:08 -- accel/accel.sh@21 -- # val= 00:24:34.903 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.903 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.903 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.903 08:21:08 -- accel/accel.sh@21 -- # val=0x1 00:24:34.903 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.903 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.903 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.903 08:21:08 -- accel/accel.sh@21 -- # val= 00:24:34.903 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.903 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.903 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.903 08:21:08 -- accel/accel.sh@21 -- # val= 00:24:34.903 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.903 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val=xor 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@24 -- # accel_opc=xor 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val=3 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val= 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val=software 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@23 -- # accel_module=software 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val=32 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val=32 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val=1 00:24:34.904 08:21:08 -- 
accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val=Yes 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val= 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:34.904 08:21:08 -- accel/accel.sh@21 -- # val= 00:24:34.904 08:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # IFS=: 00:24:34.904 08:21:08 -- accel/accel.sh@20 -- # read -r var val 00:24:36.284 08:21:09 -- accel/accel.sh@21 -- # val= 00:24:36.284 08:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # IFS=: 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # read -r var val 00:24:36.284 08:21:09 -- accel/accel.sh@21 -- # val= 00:24:36.284 08:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # IFS=: 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # read -r var val 00:24:36.284 08:21:09 -- accel/accel.sh@21 -- # val= 00:24:36.284 08:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # IFS=: 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # read -r var val 00:24:36.284 08:21:09 -- accel/accel.sh@21 -- # val= 00:24:36.284 08:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # IFS=: 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # read -r var val 00:24:36.284 08:21:09 -- accel/accel.sh@21 -- # val= 00:24:36.284 08:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # IFS=: 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # read -r var val 00:24:36.284 08:21:09 -- accel/accel.sh@21 -- # val= 00:24:36.284 08:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # IFS=: 00:24:36.284 08:21:09 -- accel/accel.sh@20 -- # read -r var val 00:24:36.284 08:21:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:36.284 08:21:09 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:24:36.284 08:21:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:36.284 00:24:36.284 real 0m2.964s 00:24:36.284 user 0m2.569s 00:24:36.284 sys 0m0.199s 00:24:36.284 08:21:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:36.284 08:21:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.284 ************************************ 00:24:36.284 END TEST accel_xor 00:24:36.284 ************************************ 00:24:36.284 08:21:09 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:24:36.284 08:21:09 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:36.284 08:21:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:36.284 08:21:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.284 ************************************ 00:24:36.284 START TEST accel_dif_verify 00:24:36.284 ************************************ 
00:24:36.284 08:21:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:24:36.284 08:21:09 -- accel/accel.sh@16 -- # local accel_opc 00:24:36.284 08:21:09 -- accel/accel.sh@17 -- # local accel_module 00:24:36.284 08:21:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:24:36.284 08:21:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:24:36.284 08:21:09 -- accel/accel.sh@12 -- # build_accel_config 00:24:36.284 08:21:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:36.284 08:21:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:36.284 08:21:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:36.284 08:21:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:36.284 08:21:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:36.284 08:21:09 -- accel/accel.sh@41 -- # local IFS=, 00:24:36.284 08:21:09 -- accel/accel.sh@42 -- # jq -r . 00:24:36.284 [2024-04-17 08:21:09.306213] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:36.284 [2024-04-17 08:21:09.306335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56899 ] 00:24:36.284 [2024-04-17 08:21:09.444478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.284 [2024-04-17 08:21:09.545581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.660 08:21:10 -- accel/accel.sh@18 -- # out=' 00:24:37.660 SPDK Configuration: 00:24:37.660 Core mask: 0x1 00:24:37.660 00:24:37.660 Accel Perf Configuration: 00:24:37.660 Workload Type: dif_verify 00:24:37.660 Vector size: 4096 bytes 00:24:37.660 Transfer size: 4096 bytes 00:24:37.660 Block size: 512 bytes 00:24:37.660 Metadata size: 8 bytes 00:24:37.660 Vector count 1 00:24:37.660 Module: software 00:24:37.660 Queue depth: 32 00:24:37.660 Allocate depth: 32 00:24:37.660 # threads/core: 1 00:24:37.660 Run time: 1 seconds 00:24:37.660 Verify: No 00:24:37.660 00:24:37.660 Running for 1 seconds... 00:24:37.660 00:24:37.660 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:37.660 ------------------------------------------------------------------------------------ 00:24:37.660 0,0 118432/s 462 MiB/s 0 0 00:24:37.660 ==================================================================================== 00:24:37.660 Total 118432/s 462 MiB/s 0 0' 00:24:37.660 08:21:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:24:37.660 08:21:10 -- accel/accel.sh@20 -- # IFS=: 00:24:37.660 08:21:10 -- accel/accel.sh@20 -- # read -r var val 00:24:37.660 08:21:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:24:37.660 08:21:10 -- accel/accel.sh@12 -- # build_accel_config 00:24:37.660 08:21:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:37.660 08:21:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:37.660 08:21:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:37.660 08:21:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:37.660 08:21:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:37.660 08:21:10 -- accel/accel.sh@41 -- # local IFS=, 00:24:37.660 08:21:10 -- accel/accel.sh@42 -- # jq -r . 00:24:37.660 [2024-04-17 08:21:10.769189] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
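Note on the DIF workloads (dif_verify above, dif_generate further below): the 4096-byte buffer is treated as eight 512-byte blocks, each protected by 8 bytes of DIF metadata ("Block size: 512 bytes", "Metadata size: 8 bytes"). The sketch below is a simplified illustration of that layout, assuming the common T10 DIF split of the 8 bytes into a 2-byte guard tag, a 2-byte application tag and a 4-byte reference tag; the guard value here is a stand-in checksum, not the CRC that SPDK's software module actually computes, and none of this is SPDK code.

# Simplified DIF sketch -- not SPDK code. Assumes the usual T10 DIF layout:
# per 512-byte block, 8 bytes of metadata = guard(2) + app tag(2) + ref tag(4).
import os, struct, zlib

BLOCK, MD_PER_BLOCK, XFER = 512, 8, 4096     # sizes reported by accel_perf above
data = os.urandom(XFER)                      # eight 512-byte blocks

def dif_generate(buf, app_tag=0, start_ref=0):
    # "dif_generate" direction: emit one 8-byte tuple per 512-byte block.
    out = []
    for i in range(len(buf) // BLOCK):
        block = buf[i * BLOCK:(i + 1) * BLOCK]
        guard = zlib.crc32(block) & 0xFFFF   # stand-in value, not the real T10 CRC
        out.append(struct.pack(">HHI", guard, app_tag, start_ref + i))
    return b"".join(out)

def dif_verify(buf, md, app_tag=0, start_ref=0):
    # "dif_verify" direction: recompute the tuples and compare.
    return md == dif_generate(buf, app_tag, start_ref)

md = dif_generate(data)
assert len(md) == (XFER // BLOCK) * MD_PER_BLOCK   # 8 blocks * 8 bytes of metadata
assert dif_verify(data, md)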
00:24:37.660 [2024-04-17 08:21:10.769264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56924 ] 00:24:37.660 [2024-04-17 08:21:10.908737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.922 [2024-04-17 08:21:11.001818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val= 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val= 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val=0x1 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val= 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val= 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val=dif_verify 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val='512 bytes' 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val='8 bytes' 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val= 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val=software 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@23 -- # accel_module=software 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 
-- # val=32 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val=32 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val=1 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val=No 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val= 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:37.922 08:21:11 -- accel/accel.sh@21 -- # val= 00:24:37.922 08:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # IFS=: 00:24:37.922 08:21:11 -- accel/accel.sh@20 -- # read -r var val 00:24:39.301 08:21:12 -- accel/accel.sh@21 -- # val= 00:24:39.301 08:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # IFS=: 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # read -r var val 00:24:39.301 08:21:12 -- accel/accel.sh@21 -- # val= 00:24:39.301 08:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # IFS=: 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # read -r var val 00:24:39.301 08:21:12 -- accel/accel.sh@21 -- # val= 00:24:39.301 08:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # IFS=: 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # read -r var val 00:24:39.301 08:21:12 -- accel/accel.sh@21 -- # val= 00:24:39.301 08:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # IFS=: 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # read -r var val 00:24:39.301 08:21:12 -- accel/accel.sh@21 -- # val= 00:24:39.301 08:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # IFS=: 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # read -r var val 00:24:39.301 08:21:12 -- accel/accel.sh@21 -- # val= 00:24:39.301 08:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # IFS=: 00:24:39.301 08:21:12 -- accel/accel.sh@20 -- # read -r var val 00:24:39.301 08:21:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:39.301 08:21:12 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:24:39.301 08:21:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:39.301 00:24:39.301 real 0m2.921s 00:24:39.301 user 0m2.543s 00:24:39.301 sys 0m0.185s 00:24:39.301 08:21:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.301 08:21:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.301 ************************************ 00:24:39.301 END TEST 
accel_dif_verify 00:24:39.301 ************************************ 00:24:39.301 08:21:12 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:24:39.301 08:21:12 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:39.301 08:21:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:39.301 08:21:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.301 ************************************ 00:24:39.301 START TEST accel_dif_generate 00:24:39.301 ************************************ 00:24:39.301 08:21:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:24:39.301 08:21:12 -- accel/accel.sh@16 -- # local accel_opc 00:24:39.301 08:21:12 -- accel/accel.sh@17 -- # local accel_module 00:24:39.301 08:21:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:24:39.301 08:21:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:24:39.301 08:21:12 -- accel/accel.sh@12 -- # build_accel_config 00:24:39.301 08:21:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:39.301 08:21:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:39.301 08:21:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:39.301 08:21:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:39.301 08:21:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:39.301 08:21:12 -- accel/accel.sh@41 -- # local IFS=, 00:24:39.301 08:21:12 -- accel/accel.sh@42 -- # jq -r . 00:24:39.301 [2024-04-17 08:21:12.308882] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:39.301 [2024-04-17 08:21:12.309070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56953 ] 00:24:39.301 [2024-04-17 08:21:12.450714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.301 [2024-04-17 08:21:12.542134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.707 08:21:13 -- accel/accel.sh@18 -- # out=' 00:24:40.707 SPDK Configuration: 00:24:40.707 Core mask: 0x1 00:24:40.707 00:24:40.707 Accel Perf Configuration: 00:24:40.707 Workload Type: dif_generate 00:24:40.707 Vector size: 4096 bytes 00:24:40.707 Transfer size: 4096 bytes 00:24:40.707 Block size: 512 bytes 00:24:40.707 Metadata size: 8 bytes 00:24:40.707 Vector count 1 00:24:40.707 Module: software 00:24:40.707 Queue depth: 32 00:24:40.707 Allocate depth: 32 00:24:40.707 # threads/core: 1 00:24:40.707 Run time: 1 seconds 00:24:40.707 Verify: No 00:24:40.707 00:24:40.707 Running for 1 seconds... 
00:24:40.707 00:24:40.707 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:40.707 ------------------------------------------------------------------------------------ 00:24:40.707 0,0 152832/s 597 MiB/s 0 0 00:24:40.707 ==================================================================================== 00:24:40.707 Total 152832/s 597 MiB/s 0 0' 00:24:40.707 08:21:13 -- accel/accel.sh@20 -- # IFS=: 00:24:40.707 08:21:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:24:40.707 08:21:13 -- accel/accel.sh@20 -- # read -r var val 00:24:40.707 08:21:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:24:40.707 08:21:13 -- accel/accel.sh@12 -- # build_accel_config 00:24:40.707 08:21:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:40.707 08:21:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:40.707 08:21:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:40.707 08:21:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:40.707 08:21:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:40.707 08:21:13 -- accel/accel.sh@41 -- # local IFS=, 00:24:40.707 08:21:13 -- accel/accel.sh@42 -- # jq -r . 00:24:40.707 [2024-04-17 08:21:13.781399] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:40.707 [2024-04-17 08:21:13.781545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56977 ] 00:24:40.707 [2024-04-17 08:21:13.920545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.707 [2024-04-17 08:21:14.024798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.967 08:21:14 -- accel/accel.sh@21 -- # val= 00:24:40.967 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.967 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.967 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val= 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val=0x1 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val= 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val= 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val=dif_generate 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 
00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val='512 bytes' 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val='8 bytes' 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val= 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val=software 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@23 -- # accel_module=software 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val=32 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val=32 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val=1 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val=No 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val= 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:40.968 08:21:14 -- accel/accel.sh@21 -- # val= 00:24:40.968 08:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # IFS=: 00:24:40.968 08:21:14 -- accel/accel.sh@20 -- # read -r var val 00:24:41.906 08:21:15 -- accel/accel.sh@21 -- # val= 00:24:41.906 08:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # IFS=: 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # read -r var val 00:24:41.906 08:21:15 -- accel/accel.sh@21 -- # val= 00:24:41.906 08:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # IFS=: 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # read -r var val 00:24:41.906 08:21:15 -- accel/accel.sh@21 -- # val= 00:24:41.906 08:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:24:41.906 08:21:15 -- 
accel/accel.sh@20 -- # IFS=: 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # read -r var val 00:24:41.906 08:21:15 -- accel/accel.sh@21 -- # val= 00:24:41.906 08:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # IFS=: 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # read -r var val 00:24:41.906 08:21:15 -- accel/accel.sh@21 -- # val= 00:24:41.906 08:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # IFS=: 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # read -r var val 00:24:41.906 08:21:15 -- accel/accel.sh@21 -- # val= 00:24:41.906 08:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # IFS=: 00:24:41.906 08:21:15 -- accel/accel.sh@20 -- # read -r var val 00:24:41.906 08:21:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:41.906 08:21:15 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:24:41.906 08:21:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:41.906 00:24:41.906 real 0m2.962s 00:24:41.906 user 0m2.574s 00:24:41.906 sys 0m0.191s 00:24:41.906 08:21:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:41.906 08:21:15 -- common/autotest_common.sh@10 -- # set +x 00:24:41.906 ************************************ 00:24:42.165 END TEST accel_dif_generate 00:24:42.165 ************************************ 00:24:42.165 08:21:15 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:24:42.165 08:21:15 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:42.165 08:21:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:42.165 08:21:15 -- common/autotest_common.sh@10 -- # set +x 00:24:42.165 ************************************ 00:24:42.165 START TEST accel_dif_generate_copy 00:24:42.165 ************************************ 00:24:42.165 08:21:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:24:42.165 08:21:15 -- accel/accel.sh@16 -- # local accel_opc 00:24:42.165 08:21:15 -- accel/accel.sh@17 -- # local accel_module 00:24:42.165 08:21:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:24:42.165 08:21:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:24:42.165 08:21:15 -- accel/accel.sh@12 -- # build_accel_config 00:24:42.165 08:21:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:42.165 08:21:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:42.165 08:21:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:42.165 08:21:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:42.165 08:21:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:42.165 08:21:15 -- accel/accel.sh@41 -- # local IFS=, 00:24:42.165 08:21:15 -- accel/accel.sh@42 -- # jq -r . 00:24:42.165 [2024-04-17 08:21:15.328911] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:24:42.165 [2024-04-17 08:21:15.329061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57007 ] 00:24:42.165 [2024-04-17 08:21:15.470682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.425 [2024-04-17 08:21:15.577564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.806 08:21:16 -- accel/accel.sh@18 -- # out=' 00:24:43.806 SPDK Configuration: 00:24:43.806 Core mask: 0x1 00:24:43.806 00:24:43.806 Accel Perf Configuration: 00:24:43.806 Workload Type: dif_generate_copy 00:24:43.806 Vector size: 4096 bytes 00:24:43.806 Transfer size: 4096 bytes 00:24:43.806 Vector count 1 00:24:43.806 Module: software 00:24:43.806 Queue depth: 32 00:24:43.806 Allocate depth: 32 00:24:43.806 # threads/core: 1 00:24:43.806 Run time: 1 seconds 00:24:43.806 Verify: No 00:24:43.806 00:24:43.806 Running for 1 seconds... 00:24:43.806 00:24:43.806 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:43.806 ------------------------------------------------------------------------------------ 00:24:43.806 0,0 110272/s 430 MiB/s 0 0 00:24:43.806 ==================================================================================== 00:24:43.806 Total 110272/s 430 MiB/s 0 0' 00:24:43.806 08:21:16 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:16 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:24:43.806 08:21:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:24:43.806 08:21:16 -- accel/accel.sh@12 -- # build_accel_config 00:24:43.806 08:21:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:43.806 08:21:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:43.806 08:21:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:43.806 08:21:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:43.806 08:21:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:43.806 08:21:16 -- accel/accel.sh@41 -- # local IFS=, 00:24:43.806 08:21:16 -- accel/accel.sh@42 -- # jq -r . 00:24:43.806 [2024-04-17 08:21:16.817207] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
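Note on dif_generate_copy: unlike plain dif_generate, this opcode also copies the 4096-byte payload to a separate destination buffer while producing the metadata, which is consistent with the lower rate reported here (110272 transfers/s, about 430 MiB/s, versus roughly 597 MiB/s for dif_generate above). The sketch below shows the copy-plus-generate idea under the same simplifying assumptions as the earlier DIF sketch (assumed T10-style 8-byte tuple, stand-in checksum); it is not SPDK code.

# Sketch of dif_generate_copy semantics -- simplified, not SPDK code.
import os, struct, zlib

BLOCK, XFER = 512, 4096
src = os.urandom(XFER)
dst = bytearray(XFER)

dst[:] = src          # the "copy" half: payload is moved, unlike plain dif_generate
md = b"".join(        # the "generate" half: one assumed 8-byte tuple per 512-byte block
    struct.pack(">HHI", zlib.crc32(dst[i:i + BLOCK]) & 0xFFFF, 0, i // BLOCK)
    for i in range(0, XFER, BLOCK)
)
assert bytes(dst) == src and len(md) == (XFER // BLOCK) * 8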
00:24:43.806 [2024-04-17 08:21:16.817295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57021 ] 00:24:43.806 [2024-04-17 08:21:16.957543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.806 [2024-04-17 08:21:17.046258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val= 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val= 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val=0x1 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val= 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val= 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val= 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val=software 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@23 -- # accel_module=software 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val=32 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val=32 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 
-- # val=1 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val=No 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val= 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:43.806 08:21:17 -- accel/accel.sh@21 -- # val= 00:24:43.806 08:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # IFS=: 00:24:43.806 08:21:17 -- accel/accel.sh@20 -- # read -r var val 00:24:45.191 08:21:18 -- accel/accel.sh@21 -- # val= 00:24:45.191 08:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # IFS=: 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # read -r var val 00:24:45.191 08:21:18 -- accel/accel.sh@21 -- # val= 00:24:45.191 08:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # IFS=: 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # read -r var val 00:24:45.191 08:21:18 -- accel/accel.sh@21 -- # val= 00:24:45.191 08:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # IFS=: 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # read -r var val 00:24:45.191 08:21:18 -- accel/accel.sh@21 -- # val= 00:24:45.191 08:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # IFS=: 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # read -r var val 00:24:45.191 08:21:18 -- accel/accel.sh@21 -- # val= 00:24:45.191 08:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # IFS=: 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # read -r var val 00:24:45.191 08:21:18 -- accel/accel.sh@21 -- # val= 00:24:45.191 08:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:24:45.191 08:21:18 -- accel/accel.sh@20 -- # IFS=: 00:24:45.192 08:21:18 -- accel/accel.sh@20 -- # read -r var val 00:24:45.192 08:21:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:45.192 08:21:18 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:24:45.192 08:21:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:45.192 00:24:45.192 real 0m2.962s 00:24:45.192 user 0m2.580s 00:24:45.192 sys 0m0.185s 00:24:45.192 08:21:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.192 08:21:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.192 ************************************ 00:24:45.192 END TEST accel_dif_generate_copy 00:24:45.192 ************************************ 00:24:45.192 08:21:18 -- accel/accel.sh@107 -- # [[ y == y ]] 00:24:45.192 08:21:18 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:45.192 08:21:18 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:24:45.192 08:21:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:45.192 08:21:18 -- 
common/autotest_common.sh@10 -- # set +x 00:24:45.192 ************************************ 00:24:45.192 START TEST accel_comp 00:24:45.192 ************************************ 00:24:45.192 08:21:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:45.192 08:21:18 -- accel/accel.sh@16 -- # local accel_opc 00:24:45.192 08:21:18 -- accel/accel.sh@17 -- # local accel_module 00:24:45.192 08:21:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:45.192 08:21:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:45.192 08:21:18 -- accel/accel.sh@12 -- # build_accel_config 00:24:45.192 08:21:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:45.192 08:21:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:45.192 08:21:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:45.192 08:21:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:45.192 08:21:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:45.192 08:21:18 -- accel/accel.sh@41 -- # local IFS=, 00:24:45.192 08:21:18 -- accel/accel.sh@42 -- # jq -r . 00:24:45.192 [2024-04-17 08:21:18.357998] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:45.192 [2024-04-17 08:21:18.358278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57061 ] 00:24:45.192 [2024-04-17 08:21:18.499600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.452 [2024-04-17 08:21:18.603078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.830 08:21:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:24:46.830 00:24:46.830 SPDK Configuration: 00:24:46.830 Core mask: 0x1 00:24:46.830 00:24:46.830 Accel Perf Configuration: 00:24:46.830 Workload Type: compress 00:24:46.830 Transfer size: 4096 bytes 00:24:46.830 Vector count 1 00:24:46.830 Module: software 00:24:46.830 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:46.830 Queue depth: 32 00:24:46.830 Allocate depth: 32 00:24:46.830 # threads/core: 1 00:24:46.830 Run time: 1 seconds 00:24:46.830 Verify: No 00:24:46.830 00:24:46.830 Running for 1 seconds... 
00:24:46.830 00:24:46.830 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:46.830 ------------------------------------------------------------------------------------ 00:24:46.830 0,0 51456/s 201 MiB/s 0 0 00:24:46.830 ==================================================================================== 00:24:46.830 Total 51456/s 201 MiB/s 0 0' 00:24:46.830 08:21:19 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:46.830 08:21:19 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:46.830 08:21:19 -- accel/accel.sh@12 -- # build_accel_config 00:24:46.830 08:21:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:46.830 08:21:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:46.830 08:21:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:46.830 08:21:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:46.830 08:21:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:46.830 08:21:19 -- accel/accel.sh@41 -- # local IFS=, 00:24:46.830 08:21:19 -- accel/accel.sh@42 -- # jq -r . 00:24:46.830 [2024-04-17 08:21:19.835522] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:46.830 [2024-04-17 08:21:19.835584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57075 ] 00:24:46.830 [2024-04-17 08:21:19.972463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.830 [2024-04-17 08:21:20.071487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val= 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val= 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val= 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val=0x1 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val= 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val= 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val=compress 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@24 -- # accel_opc=compress 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 
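Note on the compress run: accel_perf reads the input file named in the configuration (test/accel/bib) and pushes 4096-byte chunks through the software compress path; no -y is passed, so Verify is No. The sketch below approximates that flow with Python's zlib as a stand-in codec; SPDK's software module may use a different library and settings, and the file path is simply the one printed in the log, so treat this as illustrative only.

# Illustrative sketch of the "compress -l <file>" flow -- not SPDK code.
# zlib stands in for whatever codec the software module actually uses.
import zlib

CHUNK = 4096
in_path = "/home/vagrant/spdk_repo/spdk/test/accel/bib"   # path printed in the log

in_bytes = out_bytes = 0
with open(in_path, "rb") as f:
    while chunk := f.read(CHUNK):
        out_bytes += len(zlib.compress(chunk))
        in_bytes += len(chunk)

if in_bytes:
    print(f"compressed {in_bytes} -> {out_bytes} bytes "
          f"({out_bytes / in_bytes:.2%} of the original)")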
00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val= 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val=software 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@23 -- # accel_module=software 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val=32 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val=32 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val=1 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val=No 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val= 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:46.830 08:21:20 -- accel/accel.sh@21 -- # val= 00:24:46.830 08:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # IFS=: 00:24:46.830 08:21:20 -- accel/accel.sh@20 -- # read -r var val 00:24:48.208 08:21:21 -- accel/accel.sh@21 -- # val= 00:24:48.208 08:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:24:48.208 08:21:21 -- accel/accel.sh@20 -- # IFS=: 00:24:48.208 08:21:21 -- accel/accel.sh@20 -- # read -r var val 00:24:48.208 08:21:21 -- accel/accel.sh@21 -- # val= 00:24:48.208 08:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # IFS=: 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # read -r var val 00:24:48.209 08:21:21 -- accel/accel.sh@21 -- # val= 00:24:48.209 08:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # IFS=: 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # read -r var val 00:24:48.209 08:21:21 -- accel/accel.sh@21 -- # val= 
00:24:48.209 08:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # IFS=: 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # read -r var val 00:24:48.209 08:21:21 -- accel/accel.sh@21 -- # val= 00:24:48.209 08:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # IFS=: 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # read -r var val 00:24:48.209 08:21:21 -- accel/accel.sh@21 -- # val= 00:24:48.209 08:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # IFS=: 00:24:48.209 08:21:21 -- accel/accel.sh@20 -- # read -r var val 00:24:48.209 08:21:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:48.209 08:21:21 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:24:48.209 08:21:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:48.209 00:24:48.209 real 0m2.967s 00:24:48.209 user 0m2.589s 00:24:48.209 sys 0m0.180s 00:24:48.209 08:21:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:48.209 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.209 ************************************ 00:24:48.209 END TEST accel_comp 00:24:48.209 ************************************ 00:24:48.209 08:21:21 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:48.209 08:21:21 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:24:48.209 08:21:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:48.209 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.209 ************************************ 00:24:48.209 START TEST accel_decomp 00:24:48.209 ************************************ 00:24:48.209 08:21:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:48.209 08:21:21 -- accel/accel.sh@16 -- # local accel_opc 00:24:48.209 08:21:21 -- accel/accel.sh@17 -- # local accel_module 00:24:48.209 08:21:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:48.209 08:21:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:48.209 08:21:21 -- accel/accel.sh@12 -- # build_accel_config 00:24:48.209 08:21:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:48.209 08:21:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:48.209 08:21:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:48.209 08:21:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:48.209 08:21:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:48.209 08:21:21 -- accel/accel.sh@41 -- # local IFS=, 00:24:48.209 08:21:21 -- accel/accel.sh@42 -- # jq -r . 00:24:48.209 [2024-04-17 08:21:21.375374] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:48.209 [2024-04-17 08:21:21.375575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57117 ] 00:24:48.209 [2024-04-17 08:21:21.517671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.467 [2024-04-17 08:21:21.608060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.872 08:21:22 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:24:49.872 00:24:49.872 SPDK Configuration: 00:24:49.872 Core mask: 0x1 00:24:49.872 00:24:49.872 Accel Perf Configuration: 00:24:49.872 Workload Type: decompress 00:24:49.872 Transfer size: 4096 bytes 00:24:49.872 Vector count 1 00:24:49.872 Module: software 00:24:49.872 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:49.872 Queue depth: 32 00:24:49.872 Allocate depth: 32 00:24:49.872 # threads/core: 1 00:24:49.872 Run time: 1 seconds 00:24:49.872 Verify: Yes 00:24:49.872 00:24:49.872 Running for 1 seconds... 00:24:49.872 00:24:49.872 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:49.872 ------------------------------------------------------------------------------------ 00:24:49.872 0,0 56960/s 222 MiB/s 0 0 00:24:49.872 ==================================================================================== 00:24:49.872 Total 56960/s 222 MiB/s 0 0' 00:24:49.872 08:21:22 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:49.872 08:21:22 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:49.872 08:21:22 -- accel/accel.sh@12 -- # build_accel_config 00:24:49.872 08:21:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:49.872 08:21:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:49.872 08:21:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:49.872 08:21:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:49.872 08:21:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:49.872 08:21:22 -- accel/accel.sh@41 -- # local IFS=, 00:24:49.872 08:21:22 -- accel/accel.sh@42 -- # jq -r . 00:24:49.872 [2024-04-17 08:21:22.841046] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
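Note on the decompress run: it feeds previously compressed chunks of the same input file back through the engine, and because -y is passed ("Verify: Yes") the restored data is compared against the original. The round-trip sketch below uses zlib as a stand-in codec and repeats the bandwidth arithmetic for the 56960 transfers/s reported above; it is illustrative only, not SPDK code.

# Round-trip sketch for "-w decompress ... -y" -- not SPDK code, zlib as stand-in.
import zlib

CHUNK = 4096
original = b"spdk" * (CHUNK // 4)            # a compressible 4096-byte stand-in chunk
compressed = zlib.compress(original)

restored = zlib.decompress(compressed)
assert restored == original                   # the -y (Verify) comparison

# Bandwidth column again = transfers/s * 4096 bytes, in MiB/s:
print(56960 * CHUNK / (1024 * 1024))          # ~222 MiB/s, matching the table above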
00:24:49.872 [2024-04-17 08:21:22.841104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57131 ] 00:24:49.872 [2024-04-17 08:21:22.978104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.872 [2024-04-17 08:21:23.075615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val= 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val= 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val= 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val=0x1 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val= 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val= 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val=decompress 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val= 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val=software 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@23 -- # accel_module=software 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val=32 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- 
accel/accel.sh@21 -- # val=32 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val=1 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val=Yes 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val= 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:49.872 08:21:23 -- accel/accel.sh@21 -- # val= 00:24:49.872 08:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # IFS=: 00:24:49.872 08:21:23 -- accel/accel.sh@20 -- # read -r var val 00:24:51.269 08:21:24 -- accel/accel.sh@21 -- # val= 00:24:51.269 08:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # IFS=: 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # read -r var val 00:24:51.269 08:21:24 -- accel/accel.sh@21 -- # val= 00:24:51.269 08:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # IFS=: 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # read -r var val 00:24:51.269 08:21:24 -- accel/accel.sh@21 -- # val= 00:24:51.269 08:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # IFS=: 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # read -r var val 00:24:51.269 08:21:24 -- accel/accel.sh@21 -- # val= 00:24:51.269 08:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # IFS=: 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # read -r var val 00:24:51.269 08:21:24 -- accel/accel.sh@21 -- # val= 00:24:51.269 08:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # IFS=: 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # read -r var val 00:24:51.269 08:21:24 -- accel/accel.sh@21 -- # val= 00:24:51.269 08:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # IFS=: 00:24:51.269 08:21:24 -- accel/accel.sh@20 -- # read -r var val 00:24:51.269 08:21:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:51.269 08:21:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:24:51.269 08:21:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:51.269 00:24:51.269 real 0m2.951s 00:24:51.269 user 0m2.564s 00:24:51.269 sys 0m0.190s 00:24:51.269 08:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.269 08:21:24 -- common/autotest_common.sh@10 -- # set +x 00:24:51.269 ************************************ 00:24:51.269 END TEST accel_decomp 00:24:51.269 ************************************ 00:24:51.269 08:21:24 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
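The accel_decomp pass above settles at 56960 4 KiB transfers per second, which the harness rounds to 222 MiB/s; the figure can be rechecked with plain shell arithmetic. The accel_decmop_full variant queued next adds -o 0, and its configuration dump below shows the transfer size jumping from 4096 to 111250 bytes, so the test appears to decompress the bib file in full chunks rather than 4 KiB slices.

  # quick sanity check of the reported bandwidth (integer arithmetic, MiB/s)
  echo $(( 56960 * 4096 / 1024 / 1024 ))   # -> 222, matching the Total line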
00:24:51.269 08:21:24 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:24:51.269 08:21:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:51.269 08:21:24 -- common/autotest_common.sh@10 -- # set +x 00:24:51.269 ************************************ 00:24:51.269 START TEST accel_decmop_full 00:24:51.269 ************************************ 00:24:51.269 08:21:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:24:51.269 08:21:24 -- accel/accel.sh@16 -- # local accel_opc 00:24:51.269 08:21:24 -- accel/accel.sh@17 -- # local accel_module 00:24:51.269 08:21:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:24:51.269 08:21:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:24:51.269 08:21:24 -- accel/accel.sh@12 -- # build_accel_config 00:24:51.269 08:21:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:51.269 08:21:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:51.269 08:21:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:51.269 08:21:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:51.269 08:21:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:51.269 08:21:24 -- accel/accel.sh@41 -- # local IFS=, 00:24:51.269 08:21:24 -- accel/accel.sh@42 -- # jq -r . 00:24:51.269 [2024-04-17 08:21:24.388019] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:51.269 [2024-04-17 08:21:24.388109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57171 ] 00:24:51.269 [2024-04-17 08:21:24.527801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.527 [2024-04-17 08:21:24.631540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.905 08:21:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:24:52.905 00:24:52.905 SPDK Configuration: 00:24:52.905 Core mask: 0x1 00:24:52.905 00:24:52.905 Accel Perf Configuration: 00:24:52.905 Workload Type: decompress 00:24:52.905 Transfer size: 111250 bytes 00:24:52.905 Vector count 1 00:24:52.905 Module: software 00:24:52.905 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:52.905 Queue depth: 32 00:24:52.905 Allocate depth: 32 00:24:52.905 # threads/core: 1 00:24:52.905 Run time: 1 seconds 00:24:52.905 Verify: Yes 00:24:52.905 00:24:52.905 Running for 1 seconds... 
00:24:52.905 00:24:52.905 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:52.905 ------------------------------------------------------------------------------------ 00:24:52.906 0,0 3744/s 154 MiB/s 0 0 00:24:52.906 ==================================================================================== 00:24:52.906 Total 3744/s 397 MiB/s 0 0' 00:24:52.906 08:21:25 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:25 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:24:52.906 08:21:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:24:52.906 08:21:25 -- accel/accel.sh@12 -- # build_accel_config 00:24:52.906 08:21:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:52.906 08:21:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:52.906 08:21:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:52.906 08:21:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:52.906 08:21:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:52.906 08:21:25 -- accel/accel.sh@41 -- # local IFS=, 00:24:52.906 08:21:25 -- accel/accel.sh@42 -- # jq -r . 00:24:52.906 [2024-04-17 08:21:25.868233] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:52.906 [2024-04-17 08:21:25.868350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57185 ] 00:24:52.906 [2024-04-17 08:21:26.006989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.906 [2024-04-17 08:21:26.100170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val= 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val= 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val= 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val=0x1 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val= 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val= 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val=decompress 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:24:52.906 08:21:26 -- accel/accel.sh@20 
-- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val= 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val=software 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@23 -- # accel_module=software 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val=32 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val=32 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val=1 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val=Yes 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val= 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:52.906 08:21:26 -- accel/accel.sh@21 -- # val= 00:24:52.906 08:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # IFS=: 00:24:52.906 08:21:26 -- accel/accel.sh@20 -- # read -r var val 00:24:54.284 08:21:27 -- accel/accel.sh@21 -- # val= 00:24:54.284 08:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # IFS=: 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # read -r var val 00:24:54.284 08:21:27 -- accel/accel.sh@21 -- # val= 00:24:54.284 08:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # IFS=: 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # read -r var val 00:24:54.284 08:21:27 -- accel/accel.sh@21 -- # val= 00:24:54.284 08:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # IFS=: 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # read -r var val 00:24:54.284 08:21:27 -- accel/accel.sh@21 -- # 
val= 00:24:54.284 08:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # IFS=: 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # read -r var val 00:24:54.284 08:21:27 -- accel/accel.sh@21 -- # val= 00:24:54.284 08:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # IFS=: 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # read -r var val 00:24:54.284 08:21:27 -- accel/accel.sh@21 -- # val= 00:24:54.284 08:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # IFS=: 00:24:54.284 08:21:27 -- accel/accel.sh@20 -- # read -r var val 00:24:54.284 08:21:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:54.284 ************************************ 00:24:54.284 END TEST accel_decmop_full 00:24:54.284 ************************************ 00:24:54.284 08:21:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:24:54.284 08:21:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:54.284 00:24:54.284 real 0m2.969s 00:24:54.284 user 0m2.577s 00:24:54.284 sys 0m0.195s 00:24:54.284 08:21:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:54.284 08:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.284 08:21:27 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:24:54.284 08:21:27 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:24:54.284 08:21:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:54.284 08:21:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.284 ************************************ 00:24:54.284 START TEST accel_decomp_mcore 00:24:54.284 ************************************ 00:24:54.284 08:21:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:24:54.284 08:21:27 -- accel/accel.sh@16 -- # local accel_opc 00:24:54.284 08:21:27 -- accel/accel.sh@17 -- # local accel_module 00:24:54.284 08:21:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:24:54.284 08:21:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:24:54.284 08:21:27 -- accel/accel.sh@12 -- # build_accel_config 00:24:54.284 08:21:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:54.284 08:21:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:54.284 08:21:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:54.284 08:21:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:54.284 08:21:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:54.284 08:21:27 -- accel/accel.sh@41 -- # local IFS=, 00:24:54.284 08:21:27 -- accel/accel.sh@42 -- # jq -r . 00:24:54.284 [2024-04-17 08:21:27.417331] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:24:54.284 [2024-04-17 08:21:27.417534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57220 ] 00:24:54.284 [2024-04-17 08:21:27.558410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.542 [2024-04-17 08:21:27.662477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.542 [2024-04-17 08:21:27.662859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.542 [2024-04-17 08:21:27.662680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.542 [2024-04-17 08:21:27.662865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.924 08:21:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:24:55.924 00:24:55.924 SPDK Configuration: 00:24:55.924 Core mask: 0xf 00:24:55.924 00:24:55.924 Accel Perf Configuration: 00:24:55.924 Workload Type: decompress 00:24:55.924 Transfer size: 4096 bytes 00:24:55.924 Vector count 1 00:24:55.924 Module: software 00:24:55.924 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:55.924 Queue depth: 32 00:24:55.924 Allocate depth: 32 00:24:55.924 # threads/core: 1 00:24:55.924 Run time: 1 seconds 00:24:55.924 Verify: Yes 00:24:55.924 00:24:55.924 Running for 1 seconds... 00:24:55.924 00:24:55.924 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:55.924 ------------------------------------------------------------------------------------ 00:24:55.924 0,0 49024/s 90 MiB/s 0 0 00:24:55.924 3,0 56160/s 103 MiB/s 0 0 00:24:55.924 2,0 56256/s 103 MiB/s 0 0 00:24:55.924 1,0 54752/s 100 MiB/s 0 0 00:24:55.924 ==================================================================================== 00:24:55.924 Total 216192/s 844 MiB/s 0 0' 00:24:55.924 08:21:28 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:24:55.924 08:21:28 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:28 -- accel/accel.sh@12 -- # build_accel_config 00:24:55.924 08:21:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:24:55.924 08:21:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:55.924 08:21:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:55.924 08:21:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:55.924 08:21:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:55.924 08:21:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:55.924 08:21:28 -- accel/accel.sh@41 -- # local IFS=, 00:24:55.924 08:21:28 -- accel/accel.sh@42 -- # jq -r . 00:24:55.924 [2024-04-17 08:21:28.913719] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
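With -m 0xf the harness passes -c 0xf through to DPDK, so four reactors come up (cores 0-3) and each reports its own transfer rate; the Total line is simply the sum of the four per-core rows. A quick check of the numbers printed above:

  echo $(( 49024 + 56160 + 56256 + 54752 ))   # -> 216192 transfers/s
  echo $(( 216192 * 4096 / 1024 / 1024 ))     # -> 844 MiB/s, as printed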
00:24:55.924 [2024-04-17 08:21:28.913914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57242 ] 00:24:55.924 [2024-04-17 08:21:29.056178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.924 [2024-04-17 08:21:29.162966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.924 [2024-04-17 08:21:29.163070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.924 [2024-04-17 08:21:29.163173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.924 [2024-04-17 08:21:29.163177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val= 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val= 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val= 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val=0xf 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val= 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val= 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val=decompress 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val= 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val=software 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@23 -- # accel_module=software 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 
00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val=32 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val=32 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.924 08:21:29 -- accel/accel.sh@21 -- # val=1 00:24:55.924 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.924 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.925 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.925 08:21:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:55.925 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.925 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.925 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.925 08:21:29 -- accel/accel.sh@21 -- # val=Yes 00:24:55.925 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.925 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.925 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.925 08:21:29 -- accel/accel.sh@21 -- # val= 00:24:55.925 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.925 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.925 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:55.925 08:21:29 -- accel/accel.sh@21 -- # val= 00:24:55.925 08:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:24:55.925 08:21:29 -- accel/accel.sh@20 -- # IFS=: 00:24:55.925 08:21:29 -- accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@21 -- # val= 00:24:57.303 08:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # IFS=: 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@21 -- # val= 00:24:57.303 08:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # IFS=: 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@21 -- # val= 00:24:57.303 08:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # IFS=: 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@21 -- # val= 00:24:57.303 08:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # IFS=: 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@21 -- # val= 00:24:57.303 08:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # IFS=: 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@21 -- # val= 00:24:57.303 08:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # IFS=: 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@21 -- # val= 00:24:57.303 08:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # IFS=: 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@21 -- # val= 00:24:57.303 08:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # IFS=: 00:24:57.303 08:21:30 -- 
accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@21 -- # val= 00:24:57.303 08:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # IFS=: 00:24:57.303 08:21:30 -- accel/accel.sh@20 -- # read -r var val 00:24:57.303 08:21:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:24:57.303 08:21:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:24:57.303 08:21:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:57.303 00:24:57.303 real 0m3.010s 00:24:57.303 user 0m9.250s 00:24:57.303 sys 0m0.212s 00:24:57.303 08:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.303 08:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.303 ************************************ 00:24:57.303 END TEST accel_decomp_mcore 00:24:57.303 ************************************ 00:24:57.303 08:21:30 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:57.303 08:21:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:24:57.303 08:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:57.303 08:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.303 ************************************ 00:24:57.303 START TEST accel_decomp_full_mcore 00:24:57.303 ************************************ 00:24:57.303 08:21:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:57.303 08:21:30 -- accel/accel.sh@16 -- # local accel_opc 00:24:57.303 08:21:30 -- accel/accel.sh@17 -- # local accel_module 00:24:57.303 08:21:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:57.303 08:21:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:57.303 08:21:30 -- accel/accel.sh@12 -- # build_accel_config 00:24:57.303 08:21:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:57.303 08:21:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:57.303 08:21:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:57.303 08:21:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:57.303 08:21:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:57.303 08:21:30 -- accel/accel.sh@41 -- # local IFS=, 00:24:57.303 08:21:30 -- accel/accel.sh@42 -- # jq -r . 00:24:57.303 [2024-04-17 08:21:30.485008] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:57.303 [2024-04-17 08:21:30.485088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57280 ] 00:24:57.303 [2024-04-17 08:21:30.621431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.563 [2024-04-17 08:21:30.729253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.563 [2024-04-17 08:21:30.729447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.563 [2024-04-17 08:21:30.729555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.563 [2024-04-17 08:21:30.729560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.949 08:21:31 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:24:58.949 00:24:58.949 SPDK Configuration: 00:24:58.949 Core mask: 0xf 00:24:58.949 00:24:58.949 Accel Perf Configuration: 00:24:58.949 Workload Type: decompress 00:24:58.949 Transfer size: 111250 bytes 00:24:58.949 Vector count 1 00:24:58.949 Module: software 00:24:58.949 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:58.949 Queue depth: 32 00:24:58.949 Allocate depth: 32 00:24:58.949 # threads/core: 1 00:24:58.949 Run time: 1 seconds 00:24:58.949 Verify: Yes 00:24:58.949 00:24:58.949 Running for 1 seconds... 00:24:58.949 00:24:58.949 Core,Thread Transfers Bandwidth Failed Miscompares 00:24:58.949 ------------------------------------------------------------------------------------ 00:24:58.949 0,0 3552/s 146 MiB/s 0 0 00:24:58.949 3,0 3872/s 159 MiB/s 0 0 00:24:58.949 2,0 4160/s 171 MiB/s 0 0 00:24:58.949 1,0 4160/s 171 MiB/s 0 0 00:24:58.949 ==================================================================================== 00:24:58.949 Total 15744/s 1670 MiB/s 0 0' 00:24:58.949 08:21:31 -- accel/accel.sh@20 -- # IFS=: 00:24:58.949 08:21:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:58.949 08:21:31 -- accel/accel.sh@20 -- # read -r var val 00:24:58.949 08:21:31 -- accel/accel.sh@12 -- # build_accel_config 00:24:58.949 08:21:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:24:58.949 08:21:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:24:58.949 08:21:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:58.949 08:21:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:58.949 08:21:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:24:58.949 08:21:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:24:58.949 08:21:31 -- accel/accel.sh@41 -- # local IFS=, 00:24:58.949 08:21:31 -- accel/accel.sh@42 -- # jq -r . 00:24:58.949 [2024-04-17 08:21:31.991933] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
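accel_decomp_full_mcore combines the two previous variants: full 111250-byte buffers (-o 0) spread across the 0xf core mask. Per-core rates drop to a few thousand transfers per second because each transfer now carries roughly 27x more data (111250 vs 4096 bytes), but the aggregate bandwidth scales roughly linearly against the 397 MiB/s single-core full-buffer run. Checking the totals above:

  echo $(( 3552 + 3872 + 4160 + 4160 ))        # -> 15744 transfers/s
  echo $(( 15744 * 111250 / 1024 / 1024 ))     # -> 1670 MiB/s, as printed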
00:24:58.949 [2024-04-17 08:21:31.992093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57302 ] 00:24:58.949 [2024-04-17 08:21:32.128856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.949 [2024-04-17 08:21:32.229872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.949 [2024-04-17 08:21:32.230064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.949 [2024-04-17 08:21:32.230186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.949 [2024-04-17 08:21:32.230191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.218 08:21:32 -- accel/accel.sh@21 -- # val= 00:24:59.218 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.218 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.218 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.218 08:21:32 -- accel/accel.sh@21 -- # val= 00:24:59.218 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.218 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.218 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.218 08:21:32 -- accel/accel.sh@21 -- # val= 00:24:59.218 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.218 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.218 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.218 08:21:32 -- accel/accel.sh@21 -- # val=0xf 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val= 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val= 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val=decompress 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val= 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val=software 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@23 -- # accel_module=software 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 
00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val=32 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val=32 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val=1 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val=Yes 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val= 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:24:59.219 08:21:32 -- accel/accel.sh@21 -- # val= 00:24:59.219 08:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # IFS=: 00:24:59.219 08:21:32 -- accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@21 -- # val= 00:25:00.157 08:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # IFS=: 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@21 -- # val= 00:25:00.157 08:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # IFS=: 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@21 -- # val= 00:25:00.157 08:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # IFS=: 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@21 -- # val= 00:25:00.157 08:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # IFS=: 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@21 -- # val= 00:25:00.157 08:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # IFS=: 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@21 -- # val= 00:25:00.157 08:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # IFS=: 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@21 -- # val= 00:25:00.157 08:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # IFS=: 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@21 -- # val= 00:25:00.157 08:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # IFS=: 00:25:00.157 08:21:33 -- 
accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@21 -- # val= 00:25:00.157 08:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # IFS=: 00:25:00.157 08:21:33 -- accel/accel.sh@20 -- # read -r var val 00:25:00.157 08:21:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:25:00.157 08:21:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:25:00.157 08:21:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:00.157 00:25:00.157 real 0m3.014s 00:25:00.157 user 0m9.311s 00:25:00.157 sys 0m0.225s 00:25:00.157 08:21:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.157 08:21:33 -- common/autotest_common.sh@10 -- # set +x 00:25:00.157 ************************************ 00:25:00.157 END TEST accel_decomp_full_mcore 00:25:00.157 ************************************ 00:25:00.416 08:21:33 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:00.416 08:21:33 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:25:00.416 08:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:00.416 08:21:33 -- common/autotest_common.sh@10 -- # set +x 00:25:00.416 ************************************ 00:25:00.416 START TEST accel_decomp_mthread 00:25:00.416 ************************************ 00:25:00.416 08:21:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:00.416 08:21:33 -- accel/accel.sh@16 -- # local accel_opc 00:25:00.416 08:21:33 -- accel/accel.sh@17 -- # local accel_module 00:25:00.417 08:21:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:00.417 08:21:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:00.417 08:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:25:00.417 08:21:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:25:00.417 08:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:00.417 08:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:00.417 08:21:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:25:00.417 08:21:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:25:00.417 08:21:33 -- accel/accel.sh@41 -- # local IFS=, 00:25:00.417 08:21:33 -- accel/accel.sh@42 -- # jq -r . 00:25:00.417 [2024-04-17 08:21:33.558633] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:00.417 [2024-04-17 08:21:33.558724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57340 ] 00:25:00.417 [2024-04-17 08:21:33.698769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.676 [2024-04-17 08:21:33.799662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.052 08:21:35 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:25:02.052 00:25:02.052 SPDK Configuration: 00:25:02.052 Core mask: 0x1 00:25:02.052 00:25:02.052 Accel Perf Configuration: 00:25:02.052 Workload Type: decompress 00:25:02.052 Transfer size: 4096 bytes 00:25:02.052 Vector count 1 00:25:02.052 Module: software 00:25:02.052 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:02.052 Queue depth: 32 00:25:02.052 Allocate depth: 32 00:25:02.052 # threads/core: 2 00:25:02.052 Run time: 1 seconds 00:25:02.052 Verify: Yes 00:25:02.052 00:25:02.052 Running for 1 seconds... 00:25:02.052 00:25:02.052 Core,Thread Transfers Bandwidth Failed Miscompares 00:25:02.052 ------------------------------------------------------------------------------------ 00:25:02.052 0,1 29184/s 53 MiB/s 0 0 00:25:02.052 0,0 29088/s 53 MiB/s 0 0 00:25:02.052 ==================================================================================== 00:25:02.052 Total 58272/s 227 MiB/s 0 0' 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:02.052 08:21:35 -- accel/accel.sh@12 -- # build_accel_config 00:25:02.052 08:21:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:25:02.052 08:21:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:02.052 08:21:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:02.052 08:21:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:02.052 08:21:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:25:02.052 08:21:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:25:02.052 08:21:35 -- accel/accel.sh@41 -- # local IFS=, 00:25:02.052 08:21:35 -- accel/accel.sh@42 -- # jq -r . 00:25:02.052 [2024-04-17 08:21:35.047449] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
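accel_decomp_mthread keeps the single core (mask 0x1) but passes -T 2, which the configuration dump reflects as "# threads/core: 2"; the results table then carries one row per core,thread pair (0,0 and 0,1). The two threads split the work almost evenly and together land very close to the single-threaded rate, which is consistent with the software decompress path being CPU-bound on the one core. Checking the sum:

  echo $(( 29184 + 29088 ))                 # -> 58272 transfers/s across both threads
  echo $(( 58272 * 4096 / 1024 / 1024 ))    # -> 227 MiB/s, as printed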
00:25:02.052 [2024-04-17 08:21:35.047523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57359 ] 00:25:02.052 [2024-04-17 08:21:35.181568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.052 [2024-04-17 08:21:35.282164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val= 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val= 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val= 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val=0x1 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val= 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val= 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val=decompress 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val= 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val=software 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@23 -- # accel_module=software 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val=32 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- 
accel/accel.sh@21 -- # val=32 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val=2 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val=Yes 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val= 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:02.052 08:21:35 -- accel/accel.sh@21 -- # val= 00:25:02.052 08:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # IFS=: 00:25:02.052 08:21:35 -- accel/accel.sh@20 -- # read -r var val 00:25:03.442 08:21:36 -- accel/accel.sh@21 -- # val= 00:25:03.442 08:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:25:03.442 08:21:36 -- accel/accel.sh@20 -- # IFS=: 00:25:03.442 08:21:36 -- accel/accel.sh@20 -- # read -r var val 00:25:03.442 08:21:36 -- accel/accel.sh@21 -- # val= 00:25:03.442 08:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:25:03.442 08:21:36 -- accel/accel.sh@20 -- # IFS=: 00:25:03.442 08:21:36 -- accel/accel.sh@20 -- # read -r var val 00:25:03.442 08:21:36 -- accel/accel.sh@21 -- # val= 00:25:03.442 08:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:25:03.442 08:21:36 -- accel/accel.sh@20 -- # IFS=: 00:25:03.442 08:21:36 -- accel/accel.sh@20 -- # read -r var val 00:25:03.442 08:21:36 -- accel/accel.sh@21 -- # val= 00:25:03.442 08:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:25:03.443 08:21:36 -- accel/accel.sh@20 -- # IFS=: 00:25:03.443 08:21:36 -- accel/accel.sh@20 -- # read -r var val 00:25:03.443 08:21:36 -- accel/accel.sh@21 -- # val= 00:25:03.443 08:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:25:03.443 08:21:36 -- accel/accel.sh@20 -- # IFS=: 00:25:03.443 08:21:36 -- accel/accel.sh@20 -- # read -r var val 00:25:03.443 08:21:36 -- accel/accel.sh@21 -- # val= 00:25:03.443 08:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:25:03.443 08:21:36 -- accel/accel.sh@20 -- # IFS=: 00:25:03.443 08:21:36 -- accel/accel.sh@20 -- # read -r var val 00:25:03.443 08:21:36 -- accel/accel.sh@21 -- # val= 00:25:03.443 08:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:25:03.443 08:21:36 -- accel/accel.sh@20 -- # IFS=: 00:25:03.443 08:21:36 -- accel/accel.sh@20 -- # read -r var val 00:25:03.443 08:21:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:25:03.443 08:21:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:25:03.443 ************************************ 00:25:03.443 END TEST accel_decomp_mthread 00:25:03.443 ************************************ 00:25:03.443 08:21:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:03.443 00:25:03.443 real 0m2.978s 00:25:03.443 user 0m2.579s 00:25:03.443 sys 0m0.200s 00:25:03.443 08:21:36 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:25:03.443 08:21:36 -- common/autotest_common.sh@10 -- # set +x 00:25:03.443 08:21:36 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:03.443 08:21:36 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:25:03.443 08:21:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.443 08:21:36 -- common/autotest_common.sh@10 -- # set +x 00:25:03.443 ************************************ 00:25:03.443 START TEST accel_deomp_full_mthread 00:25:03.443 ************************************ 00:25:03.443 08:21:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:03.443 08:21:36 -- accel/accel.sh@16 -- # local accel_opc 00:25:03.443 08:21:36 -- accel/accel.sh@17 -- # local accel_module 00:25:03.443 08:21:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:03.443 08:21:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:03.443 08:21:36 -- accel/accel.sh@12 -- # build_accel_config 00:25:03.443 08:21:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:25:03.443 08:21:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:03.443 08:21:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:03.443 08:21:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:25:03.443 08:21:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:25:03.443 08:21:36 -- accel/accel.sh@41 -- # local IFS=, 00:25:03.443 08:21:36 -- accel/accel.sh@42 -- # jq -r . 00:25:03.443 [2024-04-17 08:21:36.588121] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:03.443 [2024-04-17 08:21:36.588258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57394 ] 00:25:03.443 [2024-04-17 08:21:36.729262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.702 [2024-04-17 08:21:36.829586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.078 08:21:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:25:05.078 00:25:05.078 SPDK Configuration: 00:25:05.078 Core mask: 0x1 00:25:05.078 00:25:05.078 Accel Perf Configuration: 00:25:05.078 Workload Type: decompress 00:25:05.078 Transfer size: 111250 bytes 00:25:05.078 Vector count 1 00:25:05.078 Module: software 00:25:05.078 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:05.078 Queue depth: 32 00:25:05.078 Allocate depth: 32 00:25:05.078 # threads/core: 2 00:25:05.078 Run time: 1 seconds 00:25:05.078 Verify: Yes 00:25:05.078 00:25:05.078 Running for 1 seconds... 
00:25:05.078 00:25:05.078 Core,Thread Transfers Bandwidth Failed Miscompares 00:25:05.078 ------------------------------------------------------------------------------------ 00:25:05.078 0,1 1920/s 79 MiB/s 0 0 00:25:05.078 0,0 1920/s 79 MiB/s 0 0 00:25:05.078 ==================================================================================== 00:25:05.078 Total 3840/s 407 MiB/s 0 0' 00:25:05.078 08:21:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:05.078 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.078 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.078 08:21:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:05.078 08:21:38 -- accel/accel.sh@12 -- # build_accel_config 00:25:05.078 08:21:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:25:05.078 08:21:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:05.078 08:21:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:05.078 08:21:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:25:05.078 08:21:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:25:05.078 08:21:38 -- accel/accel.sh@41 -- # local IFS=, 00:25:05.078 08:21:38 -- accel/accel.sh@42 -- # jq -r . 00:25:05.078 [2024-04-17 08:21:38.107633] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:05.078 [2024-04-17 08:21:38.107721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57408 ] 00:25:05.078 [2024-04-17 08:21:38.247489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.078 [2024-04-17 08:21:38.347654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.078 08:21:38 -- accel/accel.sh@21 -- # val= 00:25:05.078 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.078 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.078 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.078 08:21:38 -- accel/accel.sh@21 -- # val= 00:25:05.078 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.078 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.078 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.078 08:21:38 -- accel/accel.sh@21 -- # val= 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val=0x1 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val= 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val= 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val=decompress 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val='111250 bytes' 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val= 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val=software 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@23 -- # accel_module=software 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val=32 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val=32 00:25:05.079 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.079 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.079 08:21:38 -- accel/accel.sh@21 -- # val=2 00:25:05.337 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.337 08:21:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:25:05.337 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.337 08:21:38 -- accel/accel.sh@21 -- # val=Yes 00:25:05.337 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.337 08:21:38 -- accel/accel.sh@21 -- # val= 00:25:05.337 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:05.337 08:21:38 -- accel/accel.sh@21 -- # val= 00:25:05.337 08:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # IFS=: 00:25:05.337 08:21:38 -- accel/accel.sh@20 -- # read -r var val 00:25:06.273 08:21:39 -- accel/accel.sh@21 -- # val= 00:25:06.273 08:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:25:06.273 08:21:39 -- accel/accel.sh@20 -- # IFS=: 00:25:06.273 08:21:39 -- accel/accel.sh@20 -- # read -r var val 00:25:06.273 08:21:39 -- accel/accel.sh@21 -- # val= 00:25:06.273 08:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:25:06.273 08:21:39 -- accel/accel.sh@20 -- # IFS=: 00:25:06.273 08:21:39 -- accel/accel.sh@20 -- # read -r var val 00:25:06.273 08:21:39 -- accel/accel.sh@21 -- # val= 00:25:06.273 08:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:25:06.273 08:21:39 -- accel/accel.sh@20 -- # IFS=: 00:25:06.273 08:21:39 -- accel/accel.sh@20 -- # 
read -r var val 00:25:06.273 08:21:39 -- accel/accel.sh@21 -- # val= 00:25:06.273 08:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:25:06.273 08:21:39 -- accel/accel.sh@20 -- # IFS=: 00:25:06.274 08:21:39 -- accel/accel.sh@20 -- # read -r var val 00:25:06.274 08:21:39 -- accel/accel.sh@21 -- # val= 00:25:06.274 08:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:25:06.274 08:21:39 -- accel/accel.sh@20 -- # IFS=: 00:25:06.274 08:21:39 -- accel/accel.sh@20 -- # read -r var val 00:25:06.274 08:21:39 -- accel/accel.sh@21 -- # val= 00:25:06.274 08:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:25:06.274 08:21:39 -- accel/accel.sh@20 -- # IFS=: 00:25:06.274 08:21:39 -- accel/accel.sh@20 -- # read -r var val 00:25:06.274 08:21:39 -- accel/accel.sh@21 -- # val= 00:25:06.274 08:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:25:06.274 08:21:39 -- accel/accel.sh@20 -- # IFS=: 00:25:06.274 08:21:39 -- accel/accel.sh@20 -- # read -r var val 00:25:06.274 08:21:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:25:06.274 08:21:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:25:06.274 08:21:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:06.274 00:25:06.274 real 0m3.033s 00:25:06.274 user 0m2.652s 00:25:06.274 sys 0m0.183s 00:25:06.274 08:21:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:06.274 08:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:06.274 ************************************ 00:25:06.274 END TEST accel_deomp_full_mthread 00:25:06.274 ************************************ 00:25:06.532 08:21:39 -- accel/accel.sh@116 -- # [[ n == y ]] 00:25:06.532 08:21:39 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:25:06.532 08:21:39 -- accel/accel.sh@129 -- # build_accel_config 00:25:06.532 08:21:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:06.532 08:21:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:25:06.532 08:21:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:06.532 08:21:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:06.532 08:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:06.532 08:21:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:06.532 08:21:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:25:06.532 08:21:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:25:06.532 08:21:39 -- accel/accel.sh@41 -- # local IFS=, 00:25:06.532 08:21:39 -- accel/accel.sh@42 -- # jq -r . 00:25:06.532 ************************************ 00:25:06.532 START TEST accel_dif_functional_tests 00:25:06.532 ************************************ 00:25:06.532 08:21:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:25:06.532 [2024-04-17 08:21:39.701851] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:06.532 [2024-04-17 08:21:39.702025] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57449 ] 00:25:06.532 [2024-04-17 08:21:39.840334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:06.791 [2024-04-17 08:21:39.946584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.791 [2024-04-17 08:21:39.946771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.791 [2024-04-17 08:21:39.946774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.791 00:25:06.791 00:25:06.791 CUnit - A unit testing framework for C - Version 2.1-3 00:25:06.791 http://cunit.sourceforge.net/ 00:25:06.791 00:25:06.791 00:25:06.791 Suite: accel_dif 00:25:06.791 Test: verify: DIF generated, GUARD check ...passed 00:25:06.791 Test: verify: DIF generated, APPTAG check ...passed 00:25:06.791 Test: verify: DIF generated, REFTAG check ...passed 00:25:06.791 Test: verify: DIF not generated, GUARD check ...[2024-04-17 08:21:40.020449] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:25:06.791 passed 00:25:06.791 Test: verify: DIF not generated, APPTAG check ...[2024-04-17 08:21:40.020698] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:25:06.791 [2024-04-17 08:21:40.020766] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:25:06.791 passed 00:25:06.791 Test: verify: DIF not generated, REFTAG check ...[2024-04-17 08:21:40.020875] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:25:06.791 [2024-04-17 08:21:40.020968] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:25:06.791 passed 00:25:06.791 Test: verify: APPTAG correct, APPTAG check ...[2024-04-17 08:21:40.021061] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:25:06.791 passed 00:25:06.791 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-17 08:21:40.021140] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:25:06.791 passed 00:25:06.791 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:25:06.791 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:25:06.791 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:25:06.791 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-17 08:21:40.021359] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:25:06.791 passed 00:25:06.791 Test: generate copy: DIF generated, GUARD check ...passed 00:25:06.791 Test: generate copy: DIF generated, APTTAG check ...passed 00:25:06.791 Test: generate copy: DIF generated, REFTAG check ...passed 00:25:06.791 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:25:06.791 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:25:06.791 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:25:06.791 Test: generate copy: iovecs-len validate ...[2024-04-17 08:21:40.021641] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:25:06.791 passed 00:25:06.791 Test: generate copy: buffer alignment validate ...passed 00:25:06.791 00:25:06.791 Run Summary: Type Total Ran Passed Failed Inactive 00:25:06.791 suites 1 1 n/a 0 0 00:25:06.791 tests 20 20 20 0 0 00:25:06.791 asserts 204 204 204 0 n/a 00:25:06.791 00:25:06.791 Elapsed time = 0.004 seconds 00:25:07.049 00:25:07.049 real 0m0.569s 00:25:07.049 user 0m0.725s 00:25:07.049 sys 0m0.122s 00:25:07.049 ************************************ 00:25:07.049 END TEST accel_dif_functional_tests 00:25:07.049 ************************************ 00:25:07.049 08:21:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.049 08:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.049 00:25:07.049 real 1m3.842s 00:25:07.049 user 1m8.328s 00:25:07.049 sys 0m5.549s 00:25:07.049 ************************************ 00:25:07.049 END TEST accel 00:25:07.049 ************************************ 00:25:07.049 08:21:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.049 08:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.049 08:21:40 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:25:07.049 08:21:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:07.049 08:21:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:07.049 08:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.049 ************************************ 00:25:07.049 START TEST accel_rpc 00:25:07.049 ************************************ 00:25:07.049 08:21:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:25:07.307 * Looking for test storage... 00:25:07.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:25:07.307 08:21:40 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:25:07.307 08:21:40 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57518 00:25:07.307 08:21:40 -- accel/accel_rpc.sh@15 -- # waitforlisten 57518 00:25:07.307 08:21:40 -- common/autotest_common.sh@819 -- # '[' -z 57518 ']' 00:25:07.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.307 08:21:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.307 08:21:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:07.307 08:21:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.307 08:21:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:07.307 08:21:40 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:25:07.307 08:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.307 [2024-04-17 08:21:40.488056] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:07.307 [2024-04-17 08:21:40.488131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57518 ] 00:25:07.307 [2024-04-17 08:21:40.611554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.566 [2024-04-17 08:21:40.739221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:07.566 [2024-04-17 08:21:40.739501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.134 08:21:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:08.134 08:21:41 -- common/autotest_common.sh@852 -- # return 0 00:25:08.134 08:21:41 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:25:08.134 08:21:41 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:25:08.134 08:21:41 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:25:08.134 08:21:41 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:25:08.134 08:21:41 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:25:08.134 08:21:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:08.134 08:21:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:08.134 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.134 ************************************ 00:25:08.134 START TEST accel_assign_opcode 00:25:08.134 ************************************ 00:25:08.134 08:21:41 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:25:08.134 08:21:41 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:25:08.134 08:21:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.134 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.134 [2024-04-17 08:21:41.354740] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:25:08.134 08:21:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.134 08:21:41 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:25:08.134 08:21:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.134 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.134 [2024-04-17 08:21:41.366713] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:25:08.134 08:21:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.134 08:21:41 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:25:08.134 08:21:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.134 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.392 08:21:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.392 08:21:41 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:25:08.392 08:21:41 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:25:08.392 08:21:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.392 08:21:41 -- accel/accel_rpc.sh@42 -- # grep software 00:25:08.392 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.392 08:21:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.392 software 00:25:08.392 00:25:08.392 real 0m0.257s 00:25:08.392 user 0m0.049s 00:25:08.392 sys 0m0.015s 00:25:08.392 ************************************ 00:25:08.392 END TEST accel_assign_opcode 00:25:08.392 ************************************ 00:25:08.392 08:21:41 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.392 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.392 08:21:41 -- accel/accel_rpc.sh@55 -- # killprocess 57518 00:25:08.392 08:21:41 -- common/autotest_common.sh@926 -- # '[' -z 57518 ']' 00:25:08.392 08:21:41 -- common/autotest_common.sh@930 -- # kill -0 57518 00:25:08.392 08:21:41 -- common/autotest_common.sh@931 -- # uname 00:25:08.393 08:21:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:08.393 08:21:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57518 00:25:08.393 killing process with pid 57518 00:25:08.393 08:21:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:08.393 08:21:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:08.393 08:21:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57518' 00:25:08.393 08:21:41 -- common/autotest_common.sh@945 -- # kill 57518 00:25:08.393 08:21:41 -- common/autotest_common.sh@950 -- # wait 57518 00:25:08.961 00:25:08.961 real 0m1.697s 00:25:08.961 user 0m1.700s 00:25:08.961 sys 0m0.428s 00:25:08.961 08:21:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.961 ************************************ 00:25:08.961 END TEST accel_rpc 00:25:08.961 ************************************ 00:25:08.961 08:21:42 -- common/autotest_common.sh@10 -- # set +x 00:25:08.961 08:21:42 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:25:08.961 08:21:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:08.961 08:21:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:08.961 08:21:42 -- common/autotest_common.sh@10 -- # set +x 00:25:08.961 ************************************ 00:25:08.961 START TEST app_cmdline 00:25:08.961 ************************************ 00:25:08.961 08:21:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:25:08.961 * Looking for test storage... 00:25:08.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:25:08.961 08:21:42 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:25:08.961 08:21:42 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57600 00:25:08.961 08:21:42 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:25:08.961 08:21:42 -- app/cmdline.sh@18 -- # waitforlisten 57600 00:25:08.961 08:21:42 -- common/autotest_common.sh@819 -- # '[' -z 57600 ']' 00:25:08.961 08:21:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.961 08:21:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:08.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.961 08:21:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.961 08:21:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:08.961 08:21:42 -- common/autotest_common.sh@10 -- # set +x 00:25:08.961 [2024-04-17 08:21:42.258572] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:08.961 [2024-04-17 08:21:42.258654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57600 ] 00:25:09.220 [2024-04-17 08:21:42.394917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.220 [2024-04-17 08:21:42.488719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:09.220 [2024-04-17 08:21:42.488856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.787 08:21:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:09.787 08:21:43 -- common/autotest_common.sh@852 -- # return 0 00:25:09.787 08:21:43 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:25:10.046 { 00:25:10.046 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:25:10.046 "fields": { 00:25:10.046 "major": 24, 00:25:10.046 "minor": 1, 00:25:10.046 "patch": 1, 00:25:10.046 "suffix": "-pre", 00:25:10.046 "commit": "36faa8c31" 00:25:10.046 } 00:25:10.046 } 00:25:10.046 08:21:43 -- app/cmdline.sh@22 -- # expected_methods=() 00:25:10.046 08:21:43 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:25:10.046 08:21:43 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:25:10.046 08:21:43 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:25:10.046 08:21:43 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:25:10.046 08:21:43 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:25:10.046 08:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.046 08:21:43 -- app/cmdline.sh@26 -- # sort 00:25:10.046 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:25:10.046 08:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.046 08:21:43 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:25:10.046 08:21:43 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:25:10.046 08:21:43 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:10.046 08:21:43 -- common/autotest_common.sh@640 -- # local es=0 00:25:10.046 08:21:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:10.046 08:21:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:10.046 08:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:10.046 08:21:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:10.046 08:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:10.046 08:21:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:10.046 08:21:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:10.047 08:21:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:10.047 08:21:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:10.047 08:21:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:10.306 request: 00:25:10.306 { 00:25:10.306 "method": "env_dpdk_get_mem_stats", 00:25:10.306 "req_id": 1 00:25:10.306 } 00:25:10.306 Got 
JSON-RPC error response 00:25:10.306 response: 00:25:10.306 { 00:25:10.306 "code": -32601, 00:25:10.306 "message": "Method not found" 00:25:10.306 } 00:25:10.306 08:21:43 -- common/autotest_common.sh@643 -- # es=1 00:25:10.306 08:21:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:10.306 08:21:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:10.306 08:21:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:10.306 08:21:43 -- app/cmdline.sh@1 -- # killprocess 57600 00:25:10.306 08:21:43 -- common/autotest_common.sh@926 -- # '[' -z 57600 ']' 00:25:10.306 08:21:43 -- common/autotest_common.sh@930 -- # kill -0 57600 00:25:10.306 08:21:43 -- common/autotest_common.sh@931 -- # uname 00:25:10.306 08:21:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:10.306 08:21:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57600 00:25:10.306 killing process with pid 57600 00:25:10.306 08:21:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:10.306 08:21:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:10.306 08:21:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57600' 00:25:10.306 08:21:43 -- common/autotest_common.sh@945 -- # kill 57600 00:25:10.306 08:21:43 -- common/autotest_common.sh@950 -- # wait 57600 00:25:10.873 00:25:10.873 real 0m1.821s 00:25:10.873 user 0m2.174s 00:25:10.873 sys 0m0.384s 00:25:10.873 ************************************ 00:25:10.873 END TEST app_cmdline 00:25:10.873 ************************************ 00:25:10.873 08:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.873 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:25:10.873 08:21:43 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:25:10.873 08:21:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:10.873 08:21:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:10.873 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:25:10.873 ************************************ 00:25:10.873 START TEST version 00:25:10.873 ************************************ 00:25:10.873 08:21:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:25:10.873 * Looking for test storage... 
00:25:10.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:25:10.873 08:21:44 -- app/version.sh@17 -- # get_header_version major 00:25:10.873 08:21:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:10.873 08:21:44 -- app/version.sh@14 -- # cut -f2 00:25:10.873 08:21:44 -- app/version.sh@14 -- # tr -d '"' 00:25:10.873 08:21:44 -- app/version.sh@17 -- # major=24 00:25:10.873 08:21:44 -- app/version.sh@18 -- # get_header_version minor 00:25:10.873 08:21:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:10.873 08:21:44 -- app/version.sh@14 -- # cut -f2 00:25:10.873 08:21:44 -- app/version.sh@14 -- # tr -d '"' 00:25:10.873 08:21:44 -- app/version.sh@18 -- # minor=1 00:25:10.873 08:21:44 -- app/version.sh@19 -- # get_header_version patch 00:25:10.873 08:21:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:10.873 08:21:44 -- app/version.sh@14 -- # cut -f2 00:25:10.873 08:21:44 -- app/version.sh@14 -- # tr -d '"' 00:25:10.873 08:21:44 -- app/version.sh@19 -- # patch=1 00:25:10.873 08:21:44 -- app/version.sh@20 -- # get_header_version suffix 00:25:10.873 08:21:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:10.873 08:21:44 -- app/version.sh@14 -- # cut -f2 00:25:10.873 08:21:44 -- app/version.sh@14 -- # tr -d '"' 00:25:10.873 08:21:44 -- app/version.sh@20 -- # suffix=-pre 00:25:10.873 08:21:44 -- app/version.sh@22 -- # version=24.1 00:25:10.873 08:21:44 -- app/version.sh@25 -- # (( patch != 0 )) 00:25:10.873 08:21:44 -- app/version.sh@25 -- # version=24.1.1 00:25:10.873 08:21:44 -- app/version.sh@28 -- # version=24.1.1rc0 00:25:10.873 08:21:44 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:10.873 08:21:44 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:25:10.873 08:21:44 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:25:10.873 08:21:44 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:25:10.873 00:25:10.873 real 0m0.186s 00:25:10.873 user 0m0.101s 00:25:10.873 sys 0m0.131s 00:25:10.873 ************************************ 00:25:10.873 END TEST version 00:25:10.873 ************************************ 00:25:10.873 08:21:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.873 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.131 08:21:44 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:25:11.131 08:21:44 -- spdk/autotest.sh@204 -- # uname -s 00:25:11.131 08:21:44 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:25:11.131 08:21:44 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:25:11.131 08:21:44 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:25:11.131 08:21:44 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:25:11.131 08:21:44 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:11.131 08:21:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:11.132 08:21:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:11.132 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.132 
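[Editor's note] The version test that just finished rebuilds the SPDK version string from the SPDK_VERSION_* macros in include/spdk/version.h and compares it with what the bundled Python package reports. Below is a minimal standalone sketch of that flow, not the test script itself; it reuses the repository paths visible in this log and, like the cut -f2 calls above, assumes the macro values are tab-separated in the header.

#!/usr/bin/env bash
# Sketch: derive "MAJOR.MINOR[.PATCH][rc0]" from version.h and compare it with the
# Python bindings, mirroring the flow of test/app/version.sh seen in the log above.
hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
get_field() {
    # e.g. get_field MAJOR -> 24 (value is tab-separated from the macro name)
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}
ver="$(get_field MAJOR).$(get_field MINOR)"
patch=$(get_field PATCH)
[[ $patch != 0 ]] && ver+=".$patch"
suffix=$(get_field SUFFIX)
# The Python package encodes a -pre suffix as an rc0 pre-release (24.1.1-pre -> 24.1.1rc0).
[[ $suffix == -pre ]] && ver+=rc0
py_ver=$(PYTHONPATH=/home/vagrant/spdk_repo/spdk/python \
         python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_ver == "$ver" ]] && echo "version OK: $ver" || echo "mismatch: $py_ver vs $ver"

Run inside the same workspace, this should report "version OK: 24.1.1rc0" for the tree checked out in this build, matching the py_version comparison logged above.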
************************************ 00:25:11.132 START TEST spdk_dd 00:25:11.132 ************************************ 00:25:11.132 08:21:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:11.132 * Looking for test storage... 00:25:11.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:11.132 08:21:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:11.132 08:21:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.132 08:21:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.132 08:21:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.132 08:21:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.132 08:21:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.132 08:21:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.132 08:21:44 -- paths/export.sh@5 -- # export PATH 00:25:11.132 08:21:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.132 08:21:44 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:11.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:11.652 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:11.652 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:11.652 08:21:44 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:25:11.652 08:21:44 -- dd/dd.sh@11 -- # nvme_in_userspace 00:25:11.652 08:21:44 -- scripts/common.sh@311 -- # local bdf bdfs 00:25:11.652 08:21:44 -- scripts/common.sh@312 -- # local nvmes 00:25:11.652 08:21:44 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:25:11.652 08:21:44 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:11.652 08:21:44 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:25:11.652 08:21:44 -- scripts/common.sh@297 -- # local bdf= 00:25:11.652 08:21:44 -- scripts/common.sh@299 -- # 
iter_all_pci_class_code 01 08 02 00:25:11.652 08:21:44 -- scripts/common.sh@232 -- # local class 00:25:11.652 08:21:44 -- scripts/common.sh@233 -- # local subclass 00:25:11.652 08:21:44 -- scripts/common.sh@234 -- # local progif 00:25:11.652 08:21:44 -- scripts/common.sh@235 -- # printf %02x 1 00:25:11.652 08:21:44 -- scripts/common.sh@235 -- # class=01 00:25:11.652 08:21:44 -- scripts/common.sh@236 -- # printf %02x 8 00:25:11.652 08:21:44 -- scripts/common.sh@236 -- # subclass=08 00:25:11.652 08:21:44 -- scripts/common.sh@237 -- # printf %02x 2 00:25:11.652 08:21:44 -- scripts/common.sh@237 -- # progif=02 00:25:11.652 08:21:44 -- scripts/common.sh@239 -- # hash lspci 00:25:11.652 08:21:44 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:25:11.652 08:21:44 -- scripts/common.sh@242 -- # grep -i -- -p02 00:25:11.652 08:21:44 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:25:11.652 08:21:44 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:11.652 08:21:44 -- scripts/common.sh@244 -- # tr -d '"' 00:25:11.652 08:21:44 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:11.652 08:21:44 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:25:11.652 08:21:44 -- scripts/common.sh@15 -- # local i 00:25:11.652 08:21:44 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:25:11.652 08:21:44 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:11.652 08:21:44 -- scripts/common.sh@24 -- # return 0 00:25:11.652 08:21:44 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:25:11.652 08:21:44 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:11.652 08:21:44 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:25:11.652 08:21:44 -- scripts/common.sh@15 -- # local i 00:25:11.652 08:21:44 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:25:11.652 08:21:44 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:11.652 08:21:44 -- scripts/common.sh@24 -- # return 0 00:25:11.652 08:21:44 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:25:11.652 08:21:44 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:25:11.652 08:21:44 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:25:11.652 08:21:44 -- scripts/common.sh@322 -- # uname -s 00:25:11.652 08:21:44 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:25:11.652 08:21:44 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:25:11.652 08:21:44 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:25:11.652 08:21:44 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:25:11.652 08:21:44 -- scripts/common.sh@322 -- # uname -s 00:25:11.652 08:21:44 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:25:11.652 08:21:44 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:25:11.652 08:21:44 -- scripts/common.sh@327 -- # (( 2 )) 00:25:11.652 08:21:44 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:11.652 08:21:44 -- dd/dd.sh@13 -- # check_liburing 00:25:11.652 08:21:44 -- dd/common.sh@139 -- # local lib so 00:25:11.652 08:21:44 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:25:11.652 08:21:44 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 
-- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- 
# [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.652 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:25:11.652 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 
-- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:11.653 08:21:44 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:25:11.653 08:21:44 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:25:11.653 * spdk_dd linked to liburing 00:25:11.653 08:21:44 -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:25:11.653 08:21:44 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:25:11.653 08:21:44 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:25:11.653 08:21:44 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:25:11.653 08:21:44 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:25:11.653 08:21:44 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:25:11.653 08:21:44 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:25:11.653 08:21:44 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:25:11.653 08:21:44 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:25:11.653 08:21:44 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:25:11.653 08:21:44 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:25:11.653 08:21:44 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:25:11.653 08:21:44 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:25:11.653 08:21:44 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:25:11.653 08:21:44 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:25:11.653 08:21:44 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:25:11.653 08:21:44 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:25:11.653 08:21:44 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:25:11.653 08:21:44 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:25:11.653 08:21:44 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:25:11.653 08:21:44 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:11.653 08:21:44 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:25:11.653 08:21:44 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:25:11.653 08:21:44 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:25:11.653 08:21:44 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:25:11.653 08:21:44 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:25:11.653 08:21:44 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:25:11.653 08:21:44 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:25:11.653 08:21:44 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:25:11.653 08:21:44 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:25:11.653 08:21:44 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:25:11.653 08:21:44 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:25:11.653 08:21:44 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:25:11.653 08:21:44 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:25:11.653 08:21:44 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:25:11.653 08:21:44 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:25:11.653 08:21:44 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:25:11.653 08:21:44 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:25:11.653 08:21:44 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:25:11.653 08:21:44 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:25:11.653 08:21:44 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:25:11.653 08:21:44 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:25:11.653 08:21:44 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:25:11.653 08:21:44 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:25:11.653 08:21:44 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:25:11.653 08:21:44 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:25:11.653 
08:21:44 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:25:11.653 08:21:44 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:25:11.653 08:21:44 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:25:11.653 08:21:44 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:25:11.653 08:21:44 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:25:11.653 08:21:44 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:25:11.653 08:21:44 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:25:11.653 08:21:44 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:25:11.653 08:21:44 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:25:11.653 08:21:44 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:25:11.653 08:21:44 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:25:11.653 08:21:44 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:25:11.653 08:21:44 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:25:11.653 08:21:44 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:25:11.653 08:21:44 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:25:11.653 08:21:44 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:25:11.653 08:21:44 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:25:11.653 08:21:44 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:25:11.653 08:21:44 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:25:11.654 08:21:44 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:25:11.654 08:21:44 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:25:11.654 08:21:44 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:25:11.654 08:21:44 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:25:11.654 08:21:44 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:25:11.654 08:21:44 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:25:11.654 08:21:44 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:25:11.654 08:21:44 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:25:11.654 08:21:44 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:25:11.654 08:21:44 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:25:11.654 08:21:44 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:25:11.654 08:21:44 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:25:11.654 08:21:44 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:25:11.654 08:21:44 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:25:11.654 08:21:44 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:25:11.654 08:21:44 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:25:11.654 08:21:44 -- dd/common.sh@149 -- # [[ y != y ]] 00:25:11.654 08:21:44 -- dd/common.sh@152 -- # [[ ! 
-e /usr/lib64/liburing.so.2 ]] 00:25:11.654 08:21:44 -- dd/common.sh@156 -- # export liburing_in_use=1 00:25:11.654 08:21:44 -- dd/common.sh@156 -- # liburing_in_use=1 00:25:11.654 08:21:44 -- dd/common.sh@157 -- # return 0 00:25:11.654 08:21:44 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:25:11.654 08:21:44 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:25:11.654 08:21:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:11.654 08:21:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:11.654 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.654 ************************************ 00:25:11.654 START TEST spdk_dd_basic_rw 00:25:11.654 ************************************ 00:25:11.654 08:21:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:25:11.913 * Looking for test storage... 00:25:11.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:11.913 08:21:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:11.913 08:21:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.913 08:21:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.913 08:21:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.913 08:21:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.913 08:21:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.913 08:21:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.913 08:21:45 -- paths/export.sh@5 -- # export PATH 00:25:11.913 08:21:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.913 08:21:45 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:25:11.913 08:21:45 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:25:11.913 08:21:45 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:25:11.913 08:21:45 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:25:11.913 08:21:45 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:25:11.913 08:21:45 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:25:11.913 08:21:45 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:25:11.913 08:21:45 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:11.913 08:21:45 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:11.913 08:21:45 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:25:11.913 08:21:45 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:25:11.913 08:21:45 -- dd/common.sh@126 -- # mapfile -t id 00:25:11.913 08:21:45 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:25:12.175 08:21:45 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported 
Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): 
Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 100 Data Units Written: 7 Host Read Commands: 2133 Host Write Commands: 92 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:25:12.175 08:21:45 -- dd/common.sh@130 -- # lbaf=04 00:25:12.175 08:21:45 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe 
Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported 
Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 100 Data Units Written: 7 Host Read Commands: 2133 Host Write Commands: 92 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA 
Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:25:12.175 08:21:45 -- dd/common.sh@132 -- # lbaf=4096 00:25:12.175 08:21:45 -- dd/common.sh@134 -- # echo 4096 00:25:12.175 08:21:45 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:25:12.175 08:21:45 -- dd/basic_rw.sh@96 -- # : 00:25:12.175 08:21:45 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:12.175 08:21:45 -- dd/basic_rw.sh@96 -- # gen_conf 00:25:12.175 08:21:45 -- dd/common.sh@31 -- # xtrace_disable 00:25:12.175 08:21:45 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:25:12.175 08:21:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:12.175 08:21:45 -- common/autotest_common.sh@10 -- # set +x 00:25:12.175 08:21:45 -- common/autotest_common.sh@10 -- # set +x 00:25:12.175 ************************************ 00:25:12.175 START TEST dd_bs_lt_native_bs 00:25:12.175 ************************************ 00:25:12.175 08:21:45 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:12.175 08:21:45 -- common/autotest_common.sh@640 -- # local es=0 00:25:12.175 08:21:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:12.175 08:21:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.175 08:21:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:12.175 08:21:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.175 08:21:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:12.175 08:21:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.175 08:21:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:12.175 08:21:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.175 08:21:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:12.176 08:21:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:12.176 { 00:25:12.176 "subsystems": [ 00:25:12.176 { 00:25:12.176 "subsystem": "bdev", 00:25:12.176 "config": [ 00:25:12.176 { 00:25:12.176 "params": { 00:25:12.176 "trtype": "pcie", 00:25:12.176 "traddr": "0000:00:06.0", 00:25:12.176 "name": "Nvme0" 00:25:12.176 }, 00:25:12.176 "method": "bdev_nvme_attach_controller" 00:25:12.176 }, 00:25:12.176 { 00:25:12.176 "method": "bdev_wait_for_examine" 00:25:12.176 } 00:25:12.176 ] 00:25:12.176 } 00:25:12.176 ] 00:25:12.176 } 00:25:12.176 [2024-04-17 08:21:45.331447] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:12.176 [2024-04-17 08:21:45.331532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57917 ] 00:25:12.176 [2024-04-17 08:21:45.486182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.435 [2024-04-17 08:21:45.603584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.435 [2024-04-17 08:21:45.737332] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:25:12.435 [2024-04-17 08:21:45.737390] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:12.694 [2024-04-17 08:21:45.839918] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:12.694 08:21:45 -- common/autotest_common.sh@643 -- # es=234 00:25:12.694 08:21:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:12.694 ************************************ 00:25:12.694 END TEST dd_bs_lt_native_bs 00:25:12.694 ************************************ 00:25:12.694 08:21:45 -- common/autotest_common.sh@652 -- # es=106 00:25:12.694 08:21:45 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:12.694 08:21:45 -- common/autotest_common.sh@660 -- # es=1 00:25:12.694 08:21:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:12.694 00:25:12.694 real 0m0.682s 00:25:12.694 user 0m0.489s 00:25:12.694 sys 0m0.150s 00:25:12.694 08:21:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.694 08:21:45 -- common/autotest_common.sh@10 -- # set +x 00:25:12.694 08:21:46 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:25:12.694 08:21:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:12.694 08:21:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:12.694 08:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:12.694 ************************************ 00:25:12.694 START TEST dd_rw 00:25:12.694 ************************************ 00:25:12.694 08:21:46 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:25:12.694 08:21:46 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:25:12.694 08:21:46 -- dd/basic_rw.sh@12 -- # local count size 00:25:12.694 08:21:46 -- dd/basic_rw.sh@13 -- # local qds bss 00:25:12.694 08:21:46 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:25:12.952 08:21:46 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:12.952 08:21:46 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:12.952 08:21:46 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:12.952 08:21:46 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:12.952 08:21:46 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:25:12.952 08:21:46 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:25:12.952 08:21:46 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:12.952 08:21:46 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:12.952 08:21:46 -- dd/basic_rw.sh@23 -- # count=15 00:25:12.952 08:21:46 -- dd/basic_rw.sh@24 -- # count=15 00:25:12.952 08:21:46 -- dd/basic_rw.sh@25 -- # size=61440 00:25:12.952 08:21:46 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:12.952 08:21:46 -- dd/common.sh@98 -- # xtrace_disable 00:25:12.952 08:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:13.214 08:21:46 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
00:25:13.214 08:21:46 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:13.214 08:21:46 -- dd/common.sh@31 -- # xtrace_disable 00:25:13.214 08:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:13.214 [2024-04-17 08:21:46.527282] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:13.214 [2024-04-17 08:21:46.527362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57954 ] 00:25:13.214 { 00:25:13.214 "subsystems": [ 00:25:13.214 { 00:25:13.214 "subsystem": "bdev", 00:25:13.214 "config": [ 00:25:13.214 { 00:25:13.214 "params": { 00:25:13.214 "trtype": "pcie", 00:25:13.214 "traddr": "0000:00:06.0", 00:25:13.214 "name": "Nvme0" 00:25:13.214 }, 00:25:13.214 "method": "bdev_nvme_attach_controller" 00:25:13.214 }, 00:25:13.214 { 00:25:13.214 "method": "bdev_wait_for_examine" 00:25:13.214 } 00:25:13.214 ] 00:25:13.214 } 00:25:13.214 ] 00:25:13.214 } 00:25:13.476 [2024-04-17 08:21:46.653464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.476 [2024-04-17 08:21:46.751008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.994  Copying: 60/60 [kB] (average 29 MBps) 00:25:13.994 00:25:13.994 08:21:47 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:25:13.994 08:21:47 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:13.994 08:21:47 -- dd/common.sh@31 -- # xtrace_disable 00:25:13.994 08:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:13.994 [2024-04-17 08:21:47.171489] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:13.994 [2024-04-17 08:21:47.171648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57966 ] 00:25:13.994 { 00:25:13.994 "subsystems": [ 00:25:13.994 { 00:25:13.994 "subsystem": "bdev", 00:25:13.994 "config": [ 00:25:13.994 { 00:25:13.994 "params": { 00:25:13.994 "trtype": "pcie", 00:25:13.994 "traddr": "0000:00:06.0", 00:25:13.994 "name": "Nvme0" 00:25:13.994 }, 00:25:13.994 "method": "bdev_nvme_attach_controller" 00:25:13.994 }, 00:25:13.994 { 00:25:13.994 "method": "bdev_wait_for_examine" 00:25:13.994 } 00:25:13.994 ] 00:25:13.994 } 00:25:13.994 ] 00:25:13.994 } 00:25:13.994 [2024-04-17 08:21:47.310480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.253 [2024-04-17 08:21:47.411199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.512  Copying: 60/60 [kB] (average 19 MBps) 00:25:14.512 00:25:14.512 08:21:47 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:14.512 08:21:47 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:14.512 08:21:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:14.512 08:21:47 -- dd/common.sh@11 -- # local nvme_ref= 00:25:14.512 08:21:47 -- dd/common.sh@12 -- # local size=61440 00:25:14.512 08:21:47 -- dd/common.sh@14 -- # local bs=1048576 00:25:14.512 08:21:47 -- dd/common.sh@15 -- # local count=1 00:25:14.512 08:21:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:14.512 08:21:47 -- dd/common.sh@18 -- # gen_conf 00:25:14.512 08:21:47 -- dd/common.sh@31 -- # xtrace_disable 00:25:14.512 08:21:47 -- common/autotest_common.sh@10 -- # set +x 00:25:14.512 [2024-04-17 08:21:47.835730] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:14.512 [2024-04-17 08:21:47.835871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57985 ] 00:25:14.772 { 00:25:14.772 "subsystems": [ 00:25:14.772 { 00:25:14.772 "subsystem": "bdev", 00:25:14.772 "config": [ 00:25:14.772 { 00:25:14.772 "params": { 00:25:14.772 "trtype": "pcie", 00:25:14.772 "traddr": "0000:00:06.0", 00:25:14.772 "name": "Nvme0" 00:25:14.772 }, 00:25:14.772 "method": "bdev_nvme_attach_controller" 00:25:14.772 }, 00:25:14.772 { 00:25:14.772 "method": "bdev_wait_for_examine" 00:25:14.772 } 00:25:14.772 ] 00:25:14.772 } 00:25:14.772 ] 00:25:14.772 } 00:25:14.772 [2024-04-17 08:21:47.971468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.772 [2024-04-17 08:21:48.072215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.289  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:15.289 00:25:15.289 08:21:48 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:15.289 08:21:48 -- dd/basic_rw.sh@23 -- # count=15 00:25:15.289 08:21:48 -- dd/basic_rw.sh@24 -- # count=15 00:25:15.289 08:21:48 -- dd/basic_rw.sh@25 -- # size=61440 00:25:15.289 08:21:48 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:15.289 08:21:48 -- dd/common.sh@98 -- # xtrace_disable 00:25:15.289 08:21:48 -- common/autotest_common.sh@10 -- # set +x 00:25:15.547 08:21:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:25:15.547 08:21:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:15.547 08:21:48 -- dd/common.sh@31 -- # xtrace_disable 00:25:15.547 08:21:48 -- common/autotest_common.sh@10 -- # set +x 00:25:15.804 [2024-04-17 08:21:48.918503] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:15.804 [2024-04-17 08:21:48.918661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58003 ] 00:25:15.804 { 00:25:15.804 "subsystems": [ 00:25:15.804 { 00:25:15.804 "subsystem": "bdev", 00:25:15.804 "config": [ 00:25:15.804 { 00:25:15.804 "params": { 00:25:15.804 "trtype": "pcie", 00:25:15.804 "traddr": "0000:00:06.0", 00:25:15.804 "name": "Nvme0" 00:25:15.804 }, 00:25:15.804 "method": "bdev_nvme_attach_controller" 00:25:15.804 }, 00:25:15.804 { 00:25:15.804 "method": "bdev_wait_for_examine" 00:25:15.804 } 00:25:15.804 ] 00:25:15.804 } 00:25:15.804 ] 00:25:15.804 } 00:25:15.804 [2024-04-17 08:21:49.055995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.062 [2024-04-17 08:21:49.156553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.320  Copying: 60/60 [kB] (average 58 MBps) 00:25:16.320 00:25:16.320 08:21:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:25:16.320 08:21:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:16.320 08:21:49 -- dd/common.sh@31 -- # xtrace_disable 00:25:16.320 08:21:49 -- common/autotest_common.sh@10 -- # set +x 00:25:16.320 [2024-04-17 08:21:49.578498] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:16.320 [2024-04-17 08:21:49.578665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58010 ] 00:25:16.320 { 00:25:16.320 "subsystems": [ 00:25:16.320 { 00:25:16.320 "subsystem": "bdev", 00:25:16.320 "config": [ 00:25:16.320 { 00:25:16.320 "params": { 00:25:16.320 "trtype": "pcie", 00:25:16.320 "traddr": "0000:00:06.0", 00:25:16.320 "name": "Nvme0" 00:25:16.320 }, 00:25:16.320 "method": "bdev_nvme_attach_controller" 00:25:16.320 }, 00:25:16.320 { 00:25:16.320 "method": "bdev_wait_for_examine" 00:25:16.320 } 00:25:16.320 ] 00:25:16.320 } 00:25:16.320 ] 00:25:16.320 } 00:25:16.583 [2024-04-17 08:21:49.714911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.583 [2024-04-17 08:21:49.817651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.106  Copying: 60/60 [kB] (average 58 MBps) 00:25:17.106 00:25:17.106 08:21:50 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:17.106 08:21:50 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:17.106 08:21:50 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:17.106 08:21:50 -- dd/common.sh@11 -- # local nvme_ref= 00:25:17.107 08:21:50 -- dd/common.sh@12 -- # local size=61440 00:25:17.107 08:21:50 -- dd/common.sh@14 -- # local bs=1048576 00:25:17.107 08:21:50 -- dd/common.sh@15 -- # local count=1 00:25:17.107 08:21:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:17.107 08:21:50 -- dd/common.sh@18 -- # gen_conf 00:25:17.107 08:21:50 -- dd/common.sh@31 -- # xtrace_disable 00:25:17.107 08:21:50 -- common/autotest_common.sh@10 -- # set +x 00:25:17.107 [2024-04-17 08:21:50.242464] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:17.107 [2024-04-17 08:21:50.242617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58029 ] 00:25:17.107 { 00:25:17.107 "subsystems": [ 00:25:17.107 { 00:25:17.107 "subsystem": "bdev", 00:25:17.107 "config": [ 00:25:17.107 { 00:25:17.107 "params": { 00:25:17.107 "trtype": "pcie", 00:25:17.107 "traddr": "0000:00:06.0", 00:25:17.107 "name": "Nvme0" 00:25:17.107 }, 00:25:17.107 "method": "bdev_nvme_attach_controller" 00:25:17.107 }, 00:25:17.107 { 00:25:17.107 "method": "bdev_wait_for_examine" 00:25:17.107 } 00:25:17.107 ] 00:25:17.107 } 00:25:17.107 ] 00:25:17.107 } 00:25:17.107 [2024-04-17 08:21:50.366271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.366 [2024-04-17 08:21:50.486945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.625  Copying: 1024/1024 [kB] (average 500 MBps) 00:25:17.625 00:25:17.625 08:21:50 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:17.625 08:21:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:17.625 08:21:50 -- dd/basic_rw.sh@23 -- # count=7 00:25:17.625 08:21:50 -- dd/basic_rw.sh@24 -- # count=7 00:25:17.625 08:21:50 -- dd/basic_rw.sh@25 -- # size=57344 00:25:17.625 08:21:50 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:17.625 08:21:50 -- dd/common.sh@98 -- # xtrace_disable 00:25:17.625 08:21:50 -- common/autotest_common.sh@10 -- # set +x 00:25:18.192 08:21:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:25:18.192 08:21:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:18.192 08:21:51 -- dd/common.sh@31 -- # xtrace_disable 00:25:18.192 08:21:51 -- common/autotest_common.sh@10 -- # set +x 00:25:18.192 [2024-04-17 08:21:51.334517] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:18.192 [2024-04-17 08:21:51.334727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58047 ] 00:25:18.192 { 00:25:18.192 "subsystems": [ 00:25:18.192 { 00:25:18.192 "subsystem": "bdev", 00:25:18.192 "config": [ 00:25:18.192 { 00:25:18.192 "params": { 00:25:18.192 "trtype": "pcie", 00:25:18.192 "traddr": "0000:00:06.0", 00:25:18.192 "name": "Nvme0" 00:25:18.192 }, 00:25:18.192 "method": "bdev_nvme_attach_controller" 00:25:18.192 }, 00:25:18.192 { 00:25:18.192 "method": "bdev_wait_for_examine" 00:25:18.192 } 00:25:18.192 ] 00:25:18.192 } 00:25:18.192 ] 00:25:18.192 } 00:25:18.192 [2024-04-17 08:21:51.474932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.449 [2024-04-17 08:21:51.576548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.707  Copying: 56/56 [kB] (average 54 MBps) 00:25:18.708 00:25:18.708 08:21:51 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:25:18.708 08:21:51 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:18.708 08:21:51 -- dd/common.sh@31 -- # xtrace_disable 00:25:18.708 08:21:51 -- common/autotest_common.sh@10 -- # set +x 00:25:18.708 [2024-04-17 08:21:52.016834] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:18.708 [2024-04-17 08:21:52.017023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58065 ] 00:25:18.966 { 00:25:18.966 "subsystems": [ 00:25:18.966 { 00:25:18.966 "subsystem": "bdev", 00:25:18.966 "config": [ 00:25:18.966 { 00:25:18.966 "params": { 00:25:18.966 "trtype": "pcie", 00:25:18.966 "traddr": "0000:00:06.0", 00:25:18.966 "name": "Nvme0" 00:25:18.966 }, 00:25:18.966 "method": "bdev_nvme_attach_controller" 00:25:18.966 }, 00:25:18.966 { 00:25:18.966 "method": "bdev_wait_for_examine" 00:25:18.966 } 00:25:18.966 ] 00:25:18.966 } 00:25:18.966 ] 00:25:18.966 } 00:25:18.966 [2024-04-17 08:21:52.154262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.966 [2024-04-17 08:21:52.255806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.482  Copying: 56/56 [kB] (average 54 MBps) 00:25:19.482 00:25:19.482 08:21:52 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:19.482 08:21:52 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:19.482 08:21:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:19.482 08:21:52 -- dd/common.sh@11 -- # local nvme_ref= 00:25:19.482 08:21:52 -- dd/common.sh@12 -- # local size=57344 00:25:19.482 08:21:52 -- dd/common.sh@14 -- # local bs=1048576 00:25:19.482 08:21:52 -- dd/common.sh@15 -- # local count=1 00:25:19.482 08:21:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:19.482 08:21:52 -- dd/common.sh@18 -- # gen_conf 00:25:19.482 08:21:52 -- dd/common.sh@31 -- # xtrace_disable 00:25:19.482 08:21:52 -- common/autotest_common.sh@10 -- # set +x 00:25:19.482 [2024-04-17 08:21:52.658415] Starting SPDK v24.01.1-pre git sha1 
36faa8c31 / DPDK 23.11.0 initialization... 00:25:19.482 [2024-04-17 08:21:52.658476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58079 ] 00:25:19.482 { 00:25:19.482 "subsystems": [ 00:25:19.482 { 00:25:19.482 "subsystem": "bdev", 00:25:19.482 "config": [ 00:25:19.482 { 00:25:19.482 "params": { 00:25:19.482 "trtype": "pcie", 00:25:19.482 "traddr": "0000:00:06.0", 00:25:19.482 "name": "Nvme0" 00:25:19.483 }, 00:25:19.483 "method": "bdev_nvme_attach_controller" 00:25:19.483 }, 00:25:19.483 { 00:25:19.483 "method": "bdev_wait_for_examine" 00:25:19.483 } 00:25:19.483 ] 00:25:19.483 } 00:25:19.483 ] 00:25:19.483 } 00:25:19.483 [2024-04-17 08:21:52.795522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.743 [2024-04-17 08:21:52.884961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.002  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:20.002 00:25:20.002 08:21:53 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:20.002 08:21:53 -- dd/basic_rw.sh@23 -- # count=7 00:25:20.002 08:21:53 -- dd/basic_rw.sh@24 -- # count=7 00:25:20.002 08:21:53 -- dd/basic_rw.sh@25 -- # size=57344 00:25:20.002 08:21:53 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:20.002 08:21:53 -- dd/common.sh@98 -- # xtrace_disable 00:25:20.002 08:21:53 -- common/autotest_common.sh@10 -- # set +x 00:25:20.569 08:21:53 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:25:20.569 08:21:53 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:20.569 08:21:53 -- dd/common.sh@31 -- # xtrace_disable 00:25:20.569 08:21:53 -- common/autotest_common.sh@10 -- # set +x 00:25:20.569 [2024-04-17 08:21:53.745658] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:20.569 [2024-04-17 08:21:53.745731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58097 ] 00:25:20.569 { 00:25:20.569 "subsystems": [ 00:25:20.569 { 00:25:20.569 "subsystem": "bdev", 00:25:20.569 "config": [ 00:25:20.569 { 00:25:20.569 "params": { 00:25:20.569 "trtype": "pcie", 00:25:20.569 "traddr": "0000:00:06.0", 00:25:20.569 "name": "Nvme0" 00:25:20.569 }, 00:25:20.569 "method": "bdev_nvme_attach_controller" 00:25:20.569 }, 00:25:20.569 { 00:25:20.569 "method": "bdev_wait_for_examine" 00:25:20.569 } 00:25:20.569 ] 00:25:20.569 } 00:25:20.569 ] 00:25:20.569 } 00:25:20.569 [2024-04-17 08:21:53.886818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.828 [2024-04-17 08:21:53.988706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.087  Copying: 56/56 [kB] (average 54 MBps) 00:25:21.087 00:25:21.087 08:21:54 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:25:21.087 08:21:54 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:21.087 08:21:54 -- dd/common.sh@31 -- # xtrace_disable 00:25:21.087 08:21:54 -- common/autotest_common.sh@10 -- # set +x 00:25:21.087 [2024-04-17 08:21:54.405556] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:21.087 [2024-04-17 08:21:54.405631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58109 ] 00:25:21.087 { 00:25:21.087 "subsystems": [ 00:25:21.087 { 00:25:21.087 "subsystem": "bdev", 00:25:21.087 "config": [ 00:25:21.087 { 00:25:21.087 "params": { 00:25:21.087 "trtype": "pcie", 00:25:21.087 "traddr": "0000:00:06.0", 00:25:21.087 "name": "Nvme0" 00:25:21.087 }, 00:25:21.087 "method": "bdev_nvme_attach_controller" 00:25:21.087 }, 00:25:21.087 { 00:25:21.087 "method": "bdev_wait_for_examine" 00:25:21.087 } 00:25:21.087 ] 00:25:21.087 } 00:25:21.087 ] 00:25:21.087 } 00:25:21.345 [2024-04-17 08:21:54.545206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.345 [2024-04-17 08:21:54.648758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.862  Copying: 56/56 [kB] (average 54 MBps) 00:25:21.862 00:25:21.862 08:21:55 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:21.862 08:21:55 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:21.862 08:21:55 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:21.862 08:21:55 -- dd/common.sh@11 -- # local nvme_ref= 00:25:21.862 08:21:55 -- dd/common.sh@12 -- # local size=57344 00:25:21.863 08:21:55 -- dd/common.sh@14 -- # local bs=1048576 00:25:21.863 08:21:55 -- dd/common.sh@15 -- # local count=1 00:25:21.863 08:21:55 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:21.863 08:21:55 -- dd/common.sh@18 -- # gen_conf 00:25:21.863 08:21:55 -- dd/common.sh@31 -- # xtrace_disable 00:25:21.863 08:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:21.863 [2024-04-17 08:21:55.068888] Starting SPDK v24.01.1-pre git sha1 
36faa8c31 / DPDK 23.11.0 initialization... 00:25:21.863 [2024-04-17 08:21:55.069051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58123 ] 00:25:21.863 { 00:25:21.863 "subsystems": [ 00:25:21.863 { 00:25:21.863 "subsystem": "bdev", 00:25:21.863 "config": [ 00:25:21.863 { 00:25:21.863 "params": { 00:25:21.863 "trtype": "pcie", 00:25:21.863 "traddr": "0000:00:06.0", 00:25:21.863 "name": "Nvme0" 00:25:21.863 }, 00:25:21.863 "method": "bdev_nvme_attach_controller" 00:25:21.863 }, 00:25:21.863 { 00:25:21.863 "method": "bdev_wait_for_examine" 00:25:21.863 } 00:25:21.863 ] 00:25:21.863 } 00:25:21.863 ] 00:25:21.863 } 00:25:22.122 [2024-04-17 08:21:55.206489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.122 [2024-04-17 08:21:55.308505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.382  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:22.382 00:25:22.382 08:21:55 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:22.382 08:21:55 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:22.382 08:21:55 -- dd/basic_rw.sh@23 -- # count=3 00:25:22.382 08:21:55 -- dd/basic_rw.sh@24 -- # count=3 00:25:22.382 08:21:55 -- dd/basic_rw.sh@25 -- # size=49152 00:25:22.382 08:21:55 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:22.382 08:21:55 -- dd/common.sh@98 -- # xtrace_disable 00:25:22.382 08:21:55 -- common/autotest_common.sh@10 -- # set +x 00:25:22.954 08:21:56 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:25:22.954 08:21:56 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:22.954 08:21:56 -- dd/common.sh@31 -- # xtrace_disable 00:25:22.954 08:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:22.954 [2024-04-17 08:21:56.089215] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:22.954 [2024-04-17 08:21:56.089419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58141 ] 00:25:22.954 { 00:25:22.954 "subsystems": [ 00:25:22.954 { 00:25:22.954 "subsystem": "bdev", 00:25:22.954 "config": [ 00:25:22.954 { 00:25:22.954 "params": { 00:25:22.954 "trtype": "pcie", 00:25:22.954 "traddr": "0000:00:06.0", 00:25:22.954 "name": "Nvme0" 00:25:22.954 }, 00:25:22.954 "method": "bdev_nvme_attach_controller" 00:25:22.954 }, 00:25:22.954 { 00:25:22.954 "method": "bdev_wait_for_examine" 00:25:22.954 } 00:25:22.954 ] 00:25:22.954 } 00:25:22.954 ] 00:25:22.954 } 00:25:22.954 [2024-04-17 08:21:56.226591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.215 [2024-04-17 08:21:56.335365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.474  Copying: 48/48 [kB] (average 46 MBps) 00:25:23.474 00:25:23.474 08:21:56 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:25:23.474 08:21:56 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:23.474 08:21:56 -- dd/common.sh@31 -- # xtrace_disable 00:25:23.474 08:21:56 -- common/autotest_common.sh@10 -- # set +x 00:25:23.474 { 00:25:23.474 "subsystems": [ 00:25:23.474 { 00:25:23.474 "subsystem": "bdev", 00:25:23.474 "config": [ 00:25:23.474 { 00:25:23.474 "params": { 00:25:23.474 "trtype": "pcie", 00:25:23.474 "traddr": "0000:00:06.0", 00:25:23.474 "name": "Nvme0" 00:25:23.474 }, 00:25:23.474 "method": "bdev_nvme_attach_controller" 00:25:23.474 }, 00:25:23.474 { 00:25:23.474 "method": "bdev_wait_for_examine" 00:25:23.474 } 00:25:23.474 ] 00:25:23.474 } 00:25:23.474 ] 00:25:23.474 } 00:25:23.474 [2024-04-17 08:21:56.764274] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:23.474 [2024-04-17 08:21:56.764739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58158 ] 00:25:23.733 [2024-04-17 08:21:56.904719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.733 [2024-04-17 08:21:57.005487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.253  Copying: 48/48 [kB] (average 46 MBps) 00:25:24.253 00:25:24.253 08:21:57 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:24.253 08:21:57 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:24.253 08:21:57 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:24.253 08:21:57 -- dd/common.sh@11 -- # local nvme_ref= 00:25:24.253 08:21:57 -- dd/common.sh@12 -- # local size=49152 00:25:24.253 08:21:57 -- dd/common.sh@14 -- # local bs=1048576 00:25:24.253 08:21:57 -- dd/common.sh@15 -- # local count=1 00:25:24.253 08:21:57 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:24.253 08:21:57 -- dd/common.sh@18 -- # gen_conf 00:25:24.253 08:21:57 -- dd/common.sh@31 -- # xtrace_disable 00:25:24.253 08:21:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.253 [2024-04-17 08:21:57.430921] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:24.253 [2024-04-17 08:21:57.430989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58172 ] 00:25:24.253 { 00:25:24.253 "subsystems": [ 00:25:24.253 { 00:25:24.253 "subsystem": "bdev", 00:25:24.253 "config": [ 00:25:24.253 { 00:25:24.253 "params": { 00:25:24.253 "trtype": "pcie", 00:25:24.253 "traddr": "0000:00:06.0", 00:25:24.253 "name": "Nvme0" 00:25:24.253 }, 00:25:24.253 "method": "bdev_nvme_attach_controller" 00:25:24.253 }, 00:25:24.253 { 00:25:24.253 "method": "bdev_wait_for_examine" 00:25:24.253 } 00:25:24.253 ] 00:25:24.253 } 00:25:24.253 ] 00:25:24.253 } 00:25:24.253 [2024-04-17 08:21:57.558257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.510 [2024-04-17 08:21:57.664954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.768  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:24.768 00:25:24.768 08:21:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:24.768 08:21:58 -- dd/basic_rw.sh@23 -- # count=3 00:25:24.768 08:21:58 -- dd/basic_rw.sh@24 -- # count=3 00:25:24.768 08:21:58 -- dd/basic_rw.sh@25 -- # size=49152 00:25:24.768 08:21:58 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:24.768 08:21:58 -- dd/common.sh@98 -- # xtrace_disable 00:25:24.768 08:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:25.336 08:21:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:25:25.336 08:21:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:25.336 08:21:58 -- dd/common.sh@31 -- # xtrace_disable 00:25:25.336 08:21:58 -- common/autotest_common.sh@10 -- # set +x 00:25:25.336 [2024-04-17 08:21:58.444374] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:25.336 [2024-04-17 08:21:58.444534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58190 ] 00:25:25.336 { 00:25:25.336 "subsystems": [ 00:25:25.336 { 00:25:25.336 "subsystem": "bdev", 00:25:25.336 "config": [ 00:25:25.336 { 00:25:25.336 "params": { 00:25:25.336 "trtype": "pcie", 00:25:25.336 "traddr": "0000:00:06.0", 00:25:25.336 "name": "Nvme0" 00:25:25.336 }, 00:25:25.336 "method": "bdev_nvme_attach_controller" 00:25:25.336 }, 00:25:25.336 { 00:25:25.336 "method": "bdev_wait_for_examine" 00:25:25.336 } 00:25:25.336 ] 00:25:25.336 } 00:25:25.336 ] 00:25:25.336 } 00:25:25.336 [2024-04-17 08:21:58.581325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.594 [2024-04-17 08:21:58.682544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.853  Copying: 48/48 [kB] (average 46 MBps) 00:25:25.853 00:25:25.853 08:21:59 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:25:25.853 08:21:59 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:25.853 08:21:59 -- dd/common.sh@31 -- # xtrace_disable 00:25:25.853 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:25.853 [2024-04-17 08:21:59.086137] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:25.853 [2024-04-17 08:21:59.086327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58203 ] 00:25:25.854 { 00:25:25.854 "subsystems": [ 00:25:25.854 { 00:25:25.854 "subsystem": "bdev", 00:25:25.854 "config": [ 00:25:25.854 { 00:25:25.854 "params": { 00:25:25.854 "trtype": "pcie", 00:25:25.854 "traddr": "0000:00:06.0", 00:25:25.854 "name": "Nvme0" 00:25:25.854 }, 00:25:25.854 "method": "bdev_nvme_attach_controller" 00:25:25.854 }, 00:25:25.854 { 00:25:25.854 "method": "bdev_wait_for_examine" 00:25:25.854 } 00:25:25.854 ] 00:25:25.854 } 00:25:25.854 ] 00:25:25.854 } 00:25:26.113 [2024-04-17 08:21:59.210908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.113 [2024-04-17 08:21:59.338924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.371  Copying: 48/48 [kB] (average 46 MBps) 00:25:26.371 00:25:26.629 08:21:59 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:26.629 08:21:59 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:26.629 08:21:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:26.629 08:21:59 -- dd/common.sh@11 -- # local nvme_ref= 00:25:26.629 08:21:59 -- dd/common.sh@12 -- # local size=49152 00:25:26.629 08:21:59 -- dd/common.sh@14 -- # local bs=1048576 00:25:26.629 08:21:59 -- dd/common.sh@15 -- # local count=1 00:25:26.629 08:21:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:26.629 08:21:59 -- dd/common.sh@18 -- # gen_conf 00:25:26.629 08:21:59 -- dd/common.sh@31 -- # xtrace_disable 00:25:26.629 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:25:26.629 { 00:25:26.629 "subsystems": [ 00:25:26.629 { 00:25:26.629 
"subsystem": "bdev", 00:25:26.629 "config": [ 00:25:26.629 { 00:25:26.629 "params": { 00:25:26.630 "trtype": "pcie", 00:25:26.630 "traddr": "0000:00:06.0", 00:25:26.630 "name": "Nvme0" 00:25:26.630 }, 00:25:26.630 "method": "bdev_nvme_attach_controller" 00:25:26.630 }, 00:25:26.630 { 00:25:26.630 "method": "bdev_wait_for_examine" 00:25:26.630 } 00:25:26.630 ] 00:25:26.630 } 00:25:26.630 ] 00:25:26.630 } 00:25:26.630 [2024-04-17 08:21:59.766210] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:26.630 [2024-04-17 08:21:59.766369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58216 ] 00:25:26.630 [2024-04-17 08:21:59.906886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.887 [2024-04-17 08:22:00.020253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.145  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:27.145 00:25:27.145 00:25:27.145 real 0m14.369s 00:25:27.145 user 0m10.680s 00:25:27.145 sys 0m2.651s 00:25:27.145 08:22:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.145 08:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.145 ************************************ 00:25:27.145 END TEST dd_rw 00:25:27.145 ************************************ 00:25:27.145 08:22:00 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:25:27.145 08:22:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:27.145 08:22:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.145 08:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.145 ************************************ 00:25:27.145 START TEST dd_rw_offset 00:25:27.145 ************************************ 00:25:27.145 08:22:00 -- common/autotest_common.sh@1104 -- # basic_offset 00:25:27.145 08:22:00 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:25:27.145 08:22:00 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:25:27.145 08:22:00 -- dd/common.sh@98 -- # xtrace_disable 00:25:27.145 08:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.404 08:22:00 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:25:27.404 08:22:00 -- dd/basic_rw.sh@56 -- # 
data=b7y6no0aaceyvqlxz30ba0px595to8ogjyg47rq9g7klub0z6xhivuu5w2kgfez2cxqxryo0c5qgsjtin4cobgvppcx1zhv2xay332bow9g7jb2r47w9tfl05gm1qk2iezilbjdw8927llcduc8qa658t04lhd4fo9zn2a70ou8bxyss0vozd34vyyw9gmh6af9gpeu5uwi9as597mavpfmuruajfzqf6da5t6dg1w16i7j3jaqpnqtxjaoea4syysugqel15msz32837beuozmpc2dqyfy4wnurgdj8l1sg4suycmjwowxdbees3n2rbo23qg9b8ri6k5so9vrvdm6s07xmmu13s881xqaq0oose63n9px8d2lb78rjvpjno811egkhh60mdqxhm0mfp6wu5deilrluwnzm801jxuvml5haxzi6cfgqqn5kqfze5nophvddwtlsd40wlykxjs5gwp3mvjzlsa7l5el4svdimsx1ciwsl1zklc1y7tafp0brz211fj1zerluwoawc7oitko0x84ovyck6rsmbtbv1fcvn93q40vygxjhtdg5qi9tm7i7m1a7h12v2ai3dtq51p1etmpcers3eedj8n7zc0uvr55rhaahg9xwzvflv9m227652knnc23se8x4qu22b0nidm16gx9u24sddzdaz23l3qxelz1lwc6hvteu6qnzgmzjt04cboy17wxlqdy5t8dpk4sypjsbcrawux9f6aftw8uwo2w5kgcb57oalahwrb0jqft3o1o5rbzwsyy9vjn6fjmbp1nzlgtilfmcc8ayttee19np3dsouruu08ltlzyjjmjkwifs8eccunpolkxqn8yrillils89m657c5ku77y7elxnlpe5ge7ehlxyn5cjqf73ssca42li39l8g4wymgc4ykvu0ntj5gqdqnww65hc1ndk2znnjv5pim9o74fw8og1cr9984lwx8bhidxm4yndg1tbc7kyqxkx89zskergbw1xtwczgn88d2atel2xwap5yz7gci369xfmhkz66m2leamyfok6xuw7ixaeqi4aw7zyu0ryvu96wqo6il8r8h2k3nbj1u93f458ha7gbs74y7v7knbszfiw42e37t5j9uzd9ssznabtpowxddrmmroakt0lwh34zxwsdxb79om3gdlm1t67nru6e7sy5ledylsyv17t4jqj5h6nuy1s8t5fn7wpiizac5pkk332xr7lzpi69417km7ydq2kfagormkjnrzfd3mm8yl4d8sww5m4xhhg0u5rbcc9hha5veylv97e72ggdiq5gi0p69gnc5rfivdrvss7zsqr43lh61clfb0u6fsi3lpf0v87dihyns7r8ggyrp6zhly4koqrf1tmhhos5956iw93kmsubfwewnosfphlowrrpsag6z44ewulq1qr975a1t1o7o2ee4u12msoupxrlgdag4jilf124fyvy7fohup5eblaysy3bqeoa4v1qkk6opjhzwboq5uoim36cim3si40s5q8gx5n1mo3pj9ljhtibpxarh6kyokwgmbbo79mb78v5186dzwlja1g0949nrammbydm2l52300faszlstd8ieqmbe44m7bbmp7t4c1dwax2z74taltapf10ndag4e74l7vi58guvq7xwdi9jhjpkvd761a7zu0sr6e3fstbxdulll54o7evrtch00nrxxn28rczuzb0d81ahin9ra2vpx11u83tpc9lyxxndrlq6mv0jgiasbv5132ef1cefuco3sfmm97xf3s6vnzaef0ql5l8m4v7soyup84to7htspxg9lx856lu2bbjvgtsmyubg2yt1n55tyw8yt96mb3bjhxcmbk2e4l4qwwzp31pvcm8bgoni87nycj7g7exc6587w62fuorqr1tl5p5lxzqmveanq36dmlq6bis7svzvrhn66od6d0uyl2o25z2xk4m0im5umq6mzuv4hkl5vc2afw0yrk2ba4qzbhgps6n0zex10rs3vena35l4tgh6mfrjas213c40b96wsj6ft0sr9cyfomesgxpvkhyjkbxbwa4ouqsns1jyknarcklhkdcl6d0kngnupbs557baudl7y3bhq7d9vwmqk2giejlkddnumjfdojj2zz60fa89jmfn5acp2lg9x7eburm8humlk4e4weelblzt30mx0tnef8cyweotc5uarfl7o3mfsoe73hv5fjpcodpzabt0tkg19cvmrfsr3t86gp4d4yi286c4hodvfterybl1bamflzlvup2lywatczkexcfvftnl46f4t54wfaafdo4l7p9qpspsn9qtsw0scpj8el511h9pxr41hgkdkvef27vnsug5a91d3k268q3ihvvf4g6vzqli44rcqqoi4vcl9gljtjvhv60nvivucxung41ylm3fjoskzgrg7a83i78q1qhpao2l3wds12dyj6er5djp8mhx16n90l5bics365xdeubh2xjwbd4nt22hq4xb24qcr8dztb1jgdyfs89pr2sjpi92j1cofbpvuwkqz1vbbfvjlxp51g7pcrdjufw8hg9habalzl5uzuqif82ngz3yato3oyucvbr29hege286cxm12prbxzyijkj5bwex3t87ngti9ni9u0k5upmde4f2u774s7641w0u8rfl5j1i94umv1jtzcu5tugnflv4o5eo4es0ey9szjqcxfps32b1c3t609d8gqm52l7zvekc7oh73kss5v5sbci2077rcrpvn9j7uaf9yu24uer3qoadro6s828wiai7nwqouhcordr5r57z9w6lm5kljg4iep4zjv69zm4yqqrqlbihtxnsmyzcosx0xcnaxr79n37vr4knljcqrlmg57s5shaaia08tv1vcd3j0cyzwkl2t5quslp98asd3bb6g0wc8bjxq3otu8gtxem2jp8tify0uafwhu0b2wqxqqaqu89foomomfa8o1j64dieqoaabf6yhx61stedniodjrvmuuml6nvh1e2nlrl5avnnfqgqh1vha1ilpgoqsxv8b0io9bw7l7odx8vos4n5ixiuh1a82wirprfv6b3hbl3duiheg9dldpg1tk4gr590nm7g8cjq11lzqjot5q034x23ah54qf1gnmdfy5ea1v88iabb01ge2gydf26vtwp65onsx1xukc1cmrrwrt2s46kf8ik3siq40rf7j4vz4ij6fkzcxdvwvhcl2sqeayjo7x94avde3joiz7k53tt1hsvr8o5k68fn4o32puxep8wykjswobfv3y0rm4dr1bczj39fiugi7pm2r0jjyyz466gjm0wfcq19u69mea8vpqb6oe67mpwcmp3uavji5orqaywhhpipvjhbc6oq3j3maaan92z6fl79xn4kgqjjtxmo2tapbwjl7knqerr9o8wo3nx8b52z01ary0zcsl8aujzkfab26q9quarcia4gzzcqqtxek350lypztc714lw4i00wmwds9wd1k6v9
5bkbq6dwd2s35j3mi3ib29h0khkwgq98ukditfsyjva92b8oz47g7gvp5ge5tfhhi04rfebcf9w3r76oggsuv3juq44lvivtccy192e19h344c2udabfiejaiw4xcfhtyt2ajmpv3fk1s5jaymdlrkqw8ub1ja2vyva2mjwzt73smicu0zjmz8tmpmq71zzy6s4urd9x0n4lnzxrmr0szwaca4jhskoozq49njyr6j4060p95rzvlqalbnv6ftl44ik2fbf4g7mlntgj3vn0an1sdig1zf3bfkwxdnkhnxjhohrtpran6b3kme7q6kdbjzfgn0q4p25slo3pqnqjx1sn6z6kp2sdfascdthjqoo1k7k7f6m5h4v6ruyqsq9bxpz9qvk0gpqe17vhn7e2ex8wiwaor9oz24njord57btwtclk2z4qt0goammqlp7xs67gexi8d2k3426pgmtf2n3y91l7r17vy7dxxaiwhch83l63920ja5a0lxss89lqzfp9mwd2dnqoz6sr6lbkr2w0io79thkeuw 00:25:27.404 08:22:00 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:25:27.404 08:22:00 -- dd/basic_rw.sh@59 -- # gen_conf 00:25:27.404 08:22:00 -- dd/common.sh@31 -- # xtrace_disable 00:25:27.404 08:22:00 -- common/autotest_common.sh@10 -- # set +x 00:25:27.404 { 00:25:27.404 "subsystems": [ 00:25:27.404 { 00:25:27.404 "subsystem": "bdev", 00:25:27.404 "config": [ 00:25:27.404 { 00:25:27.404 "params": { 00:25:27.404 "trtype": "pcie", 00:25:27.404 "traddr": "0000:00:06.0", 00:25:27.404 "name": "Nvme0" 00:25:27.404 }, 00:25:27.404 "method": "bdev_nvme_attach_controller" 00:25:27.404 }, 00:25:27.404 { 00:25:27.404 "method": "bdev_wait_for_examine" 00:25:27.404 } 00:25:27.404 ] 00:25:27.404 } 00:25:27.404 ] 00:25:27.404 } 00:25:27.404 [2024-04-17 08:22:00.558625] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:27.404 [2024-04-17 08:22:00.558754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58250 ] 00:25:27.404 [2024-04-17 08:22:00.697169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.663 [2024-04-17 08:22:00.797717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.920  Copying: 4096/4096 [B] (average 4000 kBps) 00:25:27.920 00:25:27.920 08:22:01 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:25:27.920 08:22:01 -- dd/basic_rw.sh@65 -- # gen_conf 00:25:27.920 08:22:01 -- dd/common.sh@31 -- # xtrace_disable 00:25:27.920 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:27.920 { 00:25:27.920 "subsystems": [ 00:25:27.920 { 00:25:27.920 "subsystem": "bdev", 00:25:27.920 "config": [ 00:25:27.920 { 00:25:27.920 "params": { 00:25:27.920 "trtype": "pcie", 00:25:27.920 "traddr": "0000:00:06.0", 00:25:27.920 "name": "Nvme0" 00:25:27.920 }, 00:25:27.920 "method": "bdev_nvme_attach_controller" 00:25:27.920 }, 00:25:27.920 { 00:25:27.920 "method": "bdev_wait_for_examine" 00:25:27.920 } 00:25:27.920 ] 00:25:27.920 } 00:25:27.920 ] 00:25:27.920 } 00:25:27.920 [2024-04-17 08:22:01.214392] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
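The dd_rw_offset test generates 4096 bytes of data, writes them one block into the namespace with --seek=1, reads them back with --skip=1 --count=1, and compares the round trip. A minimal sketch, with a stand-in for gen_bytes and the same /tmp and ./build/bin assumptions as above:

    data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)    # stand-in for gen_bytes 4096
    printf '%s' "$data" > /tmp/dd.dump0
    ./build/bin/spdk_dd --if=/tmp/dd.dump0 --ob=Nvme0n1 --seek=1 --json /tmp/nvme0.json
    ./build/bin/spdk_dd --ib=Nvme0n1 --of=/tmp/dd.dump1 --skip=1 --count=1 --json /tmp/nvme0.json
    read -rn4096 data_check < /tmp/dd.dump1
    [[ "$data_check" == "$data" ]] && echo "offset round trip OK"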
00:25:27.920 [2024-04-17 08:22:01.214456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58264 ] 00:25:28.177 [2024-04-17 08:22:01.353611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.177 [2024-04-17 08:22:01.447193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.694  Copying: 4096/4096 [B] (average 4000 kBps) 00:25:28.694 00:25:28.694 08:22:01 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:25:28.694 ************************************ 00:25:28.694 END TEST dd_rw_offset 00:25:28.694 ************************************ 00:25:28.695 08:22:01 -- dd/basic_rw.sh@72 -- # [[ b7y6no0aaceyvqlxz30ba0px595to8ogjyg47rq9g7klub0z6xhivuu5w2kgfez2cxqxryo0c5qgsjtin4cobgvppcx1zhv2xay332bow9g7jb2r47w9tfl05gm1qk2iezilbjdw8927llcduc8qa658t04lhd4fo9zn2a70ou8bxyss0vozd34vyyw9gmh6af9gpeu5uwi9as597mavpfmuruajfzqf6da5t6dg1w16i7j3jaqpnqtxjaoea4syysugqel15msz32837beuozmpc2dqyfy4wnurgdj8l1sg4suycmjwowxdbees3n2rbo23qg9b8ri6k5so9vrvdm6s07xmmu13s881xqaq0oose63n9px8d2lb78rjvpjno811egkhh60mdqxhm0mfp6wu5deilrluwnzm801jxuvml5haxzi6cfgqqn5kqfze5nophvddwtlsd40wlykxjs5gwp3mvjzlsa7l5el4svdimsx1ciwsl1zklc1y7tafp0brz211fj1zerluwoawc7oitko0x84ovyck6rsmbtbv1fcvn93q40vygxjhtdg5qi9tm7i7m1a7h12v2ai3dtq51p1etmpcers3eedj8n7zc0uvr55rhaahg9xwzvflv9m227652knnc23se8x4qu22b0nidm16gx9u24sddzdaz23l3qxelz1lwc6hvteu6qnzgmzjt04cboy17wxlqdy5t8dpk4sypjsbcrawux9f6aftw8uwo2w5kgcb57oalahwrb0jqft3o1o5rbzwsyy9vjn6fjmbp1nzlgtilfmcc8ayttee19np3dsouruu08ltlzyjjmjkwifs8eccunpolkxqn8yrillils89m657c5ku77y7elxnlpe5ge7ehlxyn5cjqf73ssca42li39l8g4wymgc4ykvu0ntj5gqdqnww65hc1ndk2znnjv5pim9o74fw8og1cr9984lwx8bhidxm4yndg1tbc7kyqxkx89zskergbw1xtwczgn88d2atel2xwap5yz7gci369xfmhkz66m2leamyfok6xuw7ixaeqi4aw7zyu0ryvu96wqo6il8r8h2k3nbj1u93f458ha7gbs74y7v7knbszfiw42e37t5j9uzd9ssznabtpowxddrmmroakt0lwh34zxwsdxb79om3gdlm1t67nru6e7sy5ledylsyv17t4jqj5h6nuy1s8t5fn7wpiizac5pkk332xr7lzpi69417km7ydq2kfagormkjnrzfd3mm8yl4d8sww5m4xhhg0u5rbcc9hha5veylv97e72ggdiq5gi0p69gnc5rfivdrvss7zsqr43lh61clfb0u6fsi3lpf0v87dihyns7r8ggyrp6zhly4koqrf1tmhhos5956iw93kmsubfwewnosfphlowrrpsag6z44ewulq1qr975a1t1o7o2ee4u12msoupxrlgdag4jilf124fyvy7fohup5eblaysy3bqeoa4v1qkk6opjhzwboq5uoim36cim3si40s5q8gx5n1mo3pj9ljhtibpxarh6kyokwgmbbo79mb78v5186dzwlja1g0949nrammbydm2l52300faszlstd8ieqmbe44m7bbmp7t4c1dwax2z74taltapf10ndag4e74l7vi58guvq7xwdi9jhjpkvd761a7zu0sr6e3fstbxdulll54o7evrtch00nrxxn28rczuzb0d81ahin9ra2vpx11u83tpc9lyxxndrlq6mv0jgiasbv5132ef1cefuco3sfmm97xf3s6vnzaef0ql5l8m4v7soyup84to7htspxg9lx856lu2bbjvgtsmyubg2yt1n55tyw8yt96mb3bjhxcmbk2e4l4qwwzp31pvcm8bgoni87nycj7g7exc6587w62fuorqr1tl5p5lxzqmveanq36dmlq6bis7svzvrhn66od6d0uyl2o25z2xk4m0im5umq6mzuv4hkl5vc2afw0yrk2ba4qzbhgps6n0zex10rs3vena35l4tgh6mfrjas213c40b96wsj6ft0sr9cyfomesgxpvkhyjkbxbwa4ouqsns1jyknarcklhkdcl6d0kngnupbs557baudl7y3bhq7d9vwmqk2giejlkddnumjfdojj2zz60fa89jmfn5acp2lg9x7eburm8humlk4e4weelblzt30mx0tnef8cyweotc5uarfl7o3mfsoe73hv5fjpcodpzabt0tkg19cvmrfsr3t86gp4d4yi286c4hodvfterybl1bamflzlvup2lywatczkexcfvftnl46f4t54wfaafdo4l7p9qpspsn9qtsw0scpj8el511h9pxr41hgkdkvef27vnsug5a91d3k268q3ihvvf4g6vzqli44rcqqoi4vcl9gljtjvhv60nvivucxung41ylm3fjoskzgrg7a83i78q1qhpao2l3wds12dyj6er5djp8mhx16n90l5bics365xdeubh2xjwbd4nt22hq4xb24qcr8dztb1jgdyfs89pr2sjpi92j1cofbpvuwkqz1vbbfvjlxp51g7pcrdjufw8hg9habalzl5uzuqif82ngz3yato3oyucvbr29hege286cxm12prbxzyijkj5bwex3t87ngti9ni9u0k5upmde4f2u774s7641w0u8rfl5j1i94umv1jtzcu5tugnflv4
o5eo4es0ey9szjqcxfps32b1c3t609d8gqm52l7zvekc7oh73kss5v5sbci2077rcrpvn9j7uaf9yu24uer3qoadro6s828wiai7nwqouhcordr5r57z9w6lm5kljg4iep4zjv69zm4yqqrqlbihtxnsmyzcosx0xcnaxr79n37vr4knljcqrlmg57s5shaaia08tv1vcd3j0cyzwkl2t5quslp98asd3bb6g0wc8bjxq3otu8gtxem2jp8tify0uafwhu0b2wqxqqaqu89foomomfa8o1j64dieqoaabf6yhx61stedniodjrvmuuml6nvh1e2nlrl5avnnfqgqh1vha1ilpgoqsxv8b0io9bw7l7odx8vos4n5ixiuh1a82wirprfv6b3hbl3duiheg9dldpg1tk4gr590nm7g8cjq11lzqjot5q034x23ah54qf1gnmdfy5ea1v88iabb01ge2gydf26vtwp65onsx1xukc1cmrrwrt2s46kf8ik3siq40rf7j4vz4ij6fkzcxdvwvhcl2sqeayjo7x94avde3joiz7k53tt1hsvr8o5k68fn4o32puxep8wykjswobfv3y0rm4dr1bczj39fiugi7pm2r0jjyyz466gjm0wfcq19u69mea8vpqb6oe67mpwcmp3uavji5orqaywhhpipvjhbc6oq3j3maaan92z6fl79xn4kgqjjtxmo2tapbwjl7knqerr9o8wo3nx8b52z01ary0zcsl8aujzkfab26q9quarcia4gzzcqqtxek350lypztc714lw4i00wmwds9wd1k6v95bkbq6dwd2s35j3mi3ib29h0khkwgq98ukditfsyjva92b8oz47g7gvp5ge5tfhhi04rfebcf9w3r76oggsuv3juq44lvivtccy192e19h344c2udabfiejaiw4xcfhtyt2ajmpv3fk1s5jaymdlrkqw8ub1ja2vyva2mjwzt73smicu0zjmz8tmpmq71zzy6s4urd9x0n4lnzxrmr0szwaca4jhskoozq49njyr6j4060p95rzvlqalbnv6ftl44ik2fbf4g7mlntgj3vn0an1sdig1zf3bfkwxdnkhnxjhohrtpran6b3kme7q6kdbjzfgn0q4p25slo3pqnqjx1sn6z6kp2sdfascdthjqoo1k7k7f6m5h4v6ruyqsq9bxpz9qvk0gpqe17vhn7e2ex8wiwaor9oz24njord57btwtclk2z4qt0goammqlp7xs67gexi8d2k3426pgmtf2n3y91l7r17vy7dxxaiwhch83l63920ja5a0lxss89lqzfp9mwd2dnqoz6sr6lbkr2w0io79thkeuw == \b\7\y\6\n\o\0\a\a\c\e\y\v\q\l\x\z\3\0\b\a\0\p\x\5\9\5\t\o\8\o\g\j\y\g\4\7\r\q\9\g\7\k\l\u\b\0\z\6\x\h\i\v\u\u\5\w\2\k\g\f\e\z\2\c\x\q\x\r\y\o\0\c\5\q\g\s\j\t\i\n\4\c\o\b\g\v\p\p\c\x\1\z\h\v\2\x\a\y\3\3\2\b\o\w\9\g\7\j\b\2\r\4\7\w\9\t\f\l\0\5\g\m\1\q\k\2\i\e\z\i\l\b\j\d\w\8\9\2\7\l\l\c\d\u\c\8\q\a\6\5\8\t\0\4\l\h\d\4\f\o\9\z\n\2\a\7\0\o\u\8\b\x\y\s\s\0\v\o\z\d\3\4\v\y\y\w\9\g\m\h\6\a\f\9\g\p\e\u\5\u\w\i\9\a\s\5\9\7\m\a\v\p\f\m\u\r\u\a\j\f\z\q\f\6\d\a\5\t\6\d\g\1\w\1\6\i\7\j\3\j\a\q\p\n\q\t\x\j\a\o\e\a\4\s\y\y\s\u\g\q\e\l\1\5\m\s\z\3\2\8\3\7\b\e\u\o\z\m\p\c\2\d\q\y\f\y\4\w\n\u\r\g\d\j\8\l\1\s\g\4\s\u\y\c\m\j\w\o\w\x\d\b\e\e\s\3\n\2\r\b\o\2\3\q\g\9\b\8\r\i\6\k\5\s\o\9\v\r\v\d\m\6\s\0\7\x\m\m\u\1\3\s\8\8\1\x\q\a\q\0\o\o\s\e\6\3\n\9\p\x\8\d\2\l\b\7\8\r\j\v\p\j\n\o\8\1\1\e\g\k\h\h\6\0\m\d\q\x\h\m\0\m\f\p\6\w\u\5\d\e\i\l\r\l\u\w\n\z\m\8\0\1\j\x\u\v\m\l\5\h\a\x\z\i\6\c\f\g\q\q\n\5\k\q\f\z\e\5\n\o\p\h\v\d\d\w\t\l\s\d\4\0\w\l\y\k\x\j\s\5\g\w\p\3\m\v\j\z\l\s\a\7\l\5\e\l\4\s\v\d\i\m\s\x\1\c\i\w\s\l\1\z\k\l\c\1\y\7\t\a\f\p\0\b\r\z\2\1\1\f\j\1\z\e\r\l\u\w\o\a\w\c\7\o\i\t\k\o\0\x\8\4\o\v\y\c\k\6\r\s\m\b\t\b\v\1\f\c\v\n\9\3\q\4\0\v\y\g\x\j\h\t\d\g\5\q\i\9\t\m\7\i\7\m\1\a\7\h\1\2\v\2\a\i\3\d\t\q\5\1\p\1\e\t\m\p\c\e\r\s\3\e\e\d\j\8\n\7\z\c\0\u\v\r\5\5\r\h\a\a\h\g\9\x\w\z\v\f\l\v\9\m\2\2\7\6\5\2\k\n\n\c\2\3\s\e\8\x\4\q\u\2\2\b\0\n\i\d\m\1\6\g\x\9\u\2\4\s\d\d\z\d\a\z\2\3\l\3\q\x\e\l\z\1\l\w\c\6\h\v\t\e\u\6\q\n\z\g\m\z\j\t\0\4\c\b\o\y\1\7\w\x\l\q\d\y\5\t\8\d\p\k\4\s\y\p\j\s\b\c\r\a\w\u\x\9\f\6\a\f\t\w\8\u\w\o\2\w\5\k\g\c\b\5\7\o\a\l\a\h\w\r\b\0\j\q\f\t\3\o\1\o\5\r\b\z\w\s\y\y\9\v\j\n\6\f\j\m\b\p\1\n\z\l\g\t\i\l\f\m\c\c\8\a\y\t\t\e\e\1\9\n\p\3\d\s\o\u\r\u\u\0\8\l\t\l\z\y\j\j\m\j\k\w\i\f\s\8\e\c\c\u\n\p\o\l\k\x\q\n\8\y\r\i\l\l\i\l\s\8\9\m\6\5\7\c\5\k\u\7\7\y\7\e\l\x\n\l\p\e\5\g\e\7\e\h\l\x\y\n\5\c\j\q\f\7\3\s\s\c\a\4\2\l\i\3\9\l\8\g\4\w\y\m\g\c\4\y\k\v\u\0\n\t\j\5\g\q\d\q\n\w\w\6\5\h\c\1\n\d\k\2\z\n\n\j\v\5\p\i\m\9\o\7\4\f\w\8\o\g\1\c\r\9\9\8\4\l\w\x\8\b\h\i\d\x\m\4\y\n\d\g\1\t\b\c\7\k\y\q\x\k\x\8\9\z\s\k\e\r\g\b\w\1\x\t\w\c\z\g\n\8\8\d\2\a\t\e\l\2\x\w\a\p\5\y\z\7\g\c\i\3\6\9\x\f\m\h\k\z\6\6\m\2\l\e\a\m\y\f\o\k\6\x\u\w\7\i\x\a\e\q\i\4\a\w\7\z\y\u\0\r\y\v\u\9\6\w\q\o\6\
i\l\8\r\8\h\2\k\3\n\b\j\1\u\9\3\f\4\5\8\h\a\7\g\b\s\7\4\y\7\v\7\k\n\b\s\z\f\i\w\4\2\e\3\7\t\5\j\9\u\z\d\9\s\s\z\n\a\b\t\p\o\w\x\d\d\r\m\m\r\o\a\k\t\0\l\w\h\3\4\z\x\w\s\d\x\b\7\9\o\m\3\g\d\l\m\1\t\6\7\n\r\u\6\e\7\s\y\5\l\e\d\y\l\s\y\v\1\7\t\4\j\q\j\5\h\6\n\u\y\1\s\8\t\5\f\n\7\w\p\i\i\z\a\c\5\p\k\k\3\3\2\x\r\7\l\z\p\i\6\9\4\1\7\k\m\7\y\d\q\2\k\f\a\g\o\r\m\k\j\n\r\z\f\d\3\m\m\8\y\l\4\d\8\s\w\w\5\m\4\x\h\h\g\0\u\5\r\b\c\c\9\h\h\a\5\v\e\y\l\v\9\7\e\7\2\g\g\d\i\q\5\g\i\0\p\6\9\g\n\c\5\r\f\i\v\d\r\v\s\s\7\z\s\q\r\4\3\l\h\6\1\c\l\f\b\0\u\6\f\s\i\3\l\p\f\0\v\8\7\d\i\h\y\n\s\7\r\8\g\g\y\r\p\6\z\h\l\y\4\k\o\q\r\f\1\t\m\h\h\o\s\5\9\5\6\i\w\9\3\k\m\s\u\b\f\w\e\w\n\o\s\f\p\h\l\o\w\r\r\p\s\a\g\6\z\4\4\e\w\u\l\q\1\q\r\9\7\5\a\1\t\1\o\7\o\2\e\e\4\u\1\2\m\s\o\u\p\x\r\l\g\d\a\g\4\j\i\l\f\1\2\4\f\y\v\y\7\f\o\h\u\p\5\e\b\l\a\y\s\y\3\b\q\e\o\a\4\v\1\q\k\k\6\o\p\j\h\z\w\b\o\q\5\u\o\i\m\3\6\c\i\m\3\s\i\4\0\s\5\q\8\g\x\5\n\1\m\o\3\p\j\9\l\j\h\t\i\b\p\x\a\r\h\6\k\y\o\k\w\g\m\b\b\o\7\9\m\b\7\8\v\5\1\8\6\d\z\w\l\j\a\1\g\0\9\4\9\n\r\a\m\m\b\y\d\m\2\l\5\2\3\0\0\f\a\s\z\l\s\t\d\8\i\e\q\m\b\e\4\4\m\7\b\b\m\p\7\t\4\c\1\d\w\a\x\2\z\7\4\t\a\l\t\a\p\f\1\0\n\d\a\g\4\e\7\4\l\7\v\i\5\8\g\u\v\q\7\x\w\d\i\9\j\h\j\p\k\v\d\7\6\1\a\7\z\u\0\s\r\6\e\3\f\s\t\b\x\d\u\l\l\l\5\4\o\7\e\v\r\t\c\h\0\0\n\r\x\x\n\2\8\r\c\z\u\z\b\0\d\8\1\a\h\i\n\9\r\a\2\v\p\x\1\1\u\8\3\t\p\c\9\l\y\x\x\n\d\r\l\q\6\m\v\0\j\g\i\a\s\b\v\5\1\3\2\e\f\1\c\e\f\u\c\o\3\s\f\m\m\9\7\x\f\3\s\6\v\n\z\a\e\f\0\q\l\5\l\8\m\4\v\7\s\o\y\u\p\8\4\t\o\7\h\t\s\p\x\g\9\l\x\8\5\6\l\u\2\b\b\j\v\g\t\s\m\y\u\b\g\2\y\t\1\n\5\5\t\y\w\8\y\t\9\6\m\b\3\b\j\h\x\c\m\b\k\2\e\4\l\4\q\w\w\z\p\3\1\p\v\c\m\8\b\g\o\n\i\8\7\n\y\c\j\7\g\7\e\x\c\6\5\8\7\w\6\2\f\u\o\r\q\r\1\t\l\5\p\5\l\x\z\q\m\v\e\a\n\q\3\6\d\m\l\q\6\b\i\s\7\s\v\z\v\r\h\n\6\6\o\d\6\d\0\u\y\l\2\o\2\5\z\2\x\k\4\m\0\i\m\5\u\m\q\6\m\z\u\v\4\h\k\l\5\v\c\2\a\f\w\0\y\r\k\2\b\a\4\q\z\b\h\g\p\s\6\n\0\z\e\x\1\0\r\s\3\v\e\n\a\3\5\l\4\t\g\h\6\m\f\r\j\a\s\2\1\3\c\4\0\b\9\6\w\s\j\6\f\t\0\s\r\9\c\y\f\o\m\e\s\g\x\p\v\k\h\y\j\k\b\x\b\w\a\4\o\u\q\s\n\s\1\j\y\k\n\a\r\c\k\l\h\k\d\c\l\6\d\0\k\n\g\n\u\p\b\s\5\5\7\b\a\u\d\l\7\y\3\b\h\q\7\d\9\v\w\m\q\k\2\g\i\e\j\l\k\d\d\n\u\m\j\f\d\o\j\j\2\z\z\6\0\f\a\8\9\j\m\f\n\5\a\c\p\2\l\g\9\x\7\e\b\u\r\m\8\h\u\m\l\k\4\e\4\w\e\e\l\b\l\z\t\3\0\m\x\0\t\n\e\f\8\c\y\w\e\o\t\c\5\u\a\r\f\l\7\o\3\m\f\s\o\e\7\3\h\v\5\f\j\p\c\o\d\p\z\a\b\t\0\t\k\g\1\9\c\v\m\r\f\s\r\3\t\8\6\g\p\4\d\4\y\i\2\8\6\c\4\h\o\d\v\f\t\e\r\y\b\l\1\b\a\m\f\l\z\l\v\u\p\2\l\y\w\a\t\c\z\k\e\x\c\f\v\f\t\n\l\4\6\f\4\t\5\4\w\f\a\a\f\d\o\4\l\7\p\9\q\p\s\p\s\n\9\q\t\s\w\0\s\c\p\j\8\e\l\5\1\1\h\9\p\x\r\4\1\h\g\k\d\k\v\e\f\2\7\v\n\s\u\g\5\a\9\1\d\3\k\2\6\8\q\3\i\h\v\v\f\4\g\6\v\z\q\l\i\4\4\r\c\q\q\o\i\4\v\c\l\9\g\l\j\t\j\v\h\v\6\0\n\v\i\v\u\c\x\u\n\g\4\1\y\l\m\3\f\j\o\s\k\z\g\r\g\7\a\8\3\i\7\8\q\1\q\h\p\a\o\2\l\3\w\d\s\1\2\d\y\j\6\e\r\5\d\j\p\8\m\h\x\1\6\n\9\0\l\5\b\i\c\s\3\6\5\x\d\e\u\b\h\2\x\j\w\b\d\4\n\t\2\2\h\q\4\x\b\2\4\q\c\r\8\d\z\t\b\1\j\g\d\y\f\s\8\9\p\r\2\s\j\p\i\9\2\j\1\c\o\f\b\p\v\u\w\k\q\z\1\v\b\b\f\v\j\l\x\p\5\1\g\7\p\c\r\d\j\u\f\w\8\h\g\9\h\a\b\a\l\z\l\5\u\z\u\q\i\f\8\2\n\g\z\3\y\a\t\o\3\o\y\u\c\v\b\r\2\9\h\e\g\e\2\8\6\c\x\m\1\2\p\r\b\x\z\y\i\j\k\j\5\b\w\e\x\3\t\8\7\n\g\t\i\9\n\i\9\u\0\k\5\u\p\m\d\e\4\f\2\u\7\7\4\s\7\6\4\1\w\0\u\8\r\f\l\5\j\1\i\9\4\u\m\v\1\j\t\z\c\u\5\t\u\g\n\f\l\v\4\o\5\e\o\4\e\s\0\e\y\9\s\z\j\q\c\x\f\p\s\3\2\b\1\c\3\t\6\0\9\d\8\g\q\m\5\2\l\7\z\v\e\k\c\7\o\h\7\3\k\s\s\5\v\5\s\b\c\i\2\0\7\7\r\c\r\p\v\n\9\j\7\u\a\f\9\y\u\2\4\u\e\r\3\q\o\a\d\r\o\6\s\8\2\8\w\i\a\i\7\n\w\q\o\u\h\c\o\r\d\r\5\r\5\7\z\9\w\6\l\m\5\k\l\j\g\4\i\e\p\4\z\j\v\6\9\z\m\4\y
\q\q\r\q\l\b\i\h\t\x\n\s\m\y\z\c\o\s\x\0\x\c\n\a\x\r\7\9\n\3\7\v\r\4\k\n\l\j\c\q\r\l\m\g\5\7\s\5\s\h\a\a\i\a\0\8\t\v\1\v\c\d\3\j\0\c\y\z\w\k\l\2\t\5\q\u\s\l\p\9\8\a\s\d\3\b\b\6\g\0\w\c\8\b\j\x\q\3\o\t\u\8\g\t\x\e\m\2\j\p\8\t\i\f\y\0\u\a\f\w\h\u\0\b\2\w\q\x\q\q\a\q\u\8\9\f\o\o\m\o\m\f\a\8\o\1\j\6\4\d\i\e\q\o\a\a\b\f\6\y\h\x\6\1\s\t\e\d\n\i\o\d\j\r\v\m\u\u\m\l\6\n\v\h\1\e\2\n\l\r\l\5\a\v\n\n\f\q\g\q\h\1\v\h\a\1\i\l\p\g\o\q\s\x\v\8\b\0\i\o\9\b\w\7\l\7\o\d\x\8\v\o\s\4\n\5\i\x\i\u\h\1\a\8\2\w\i\r\p\r\f\v\6\b\3\h\b\l\3\d\u\i\h\e\g\9\d\l\d\p\g\1\t\k\4\g\r\5\9\0\n\m\7\g\8\c\j\q\1\1\l\z\q\j\o\t\5\q\0\3\4\x\2\3\a\h\5\4\q\f\1\g\n\m\d\f\y\5\e\a\1\v\8\8\i\a\b\b\0\1\g\e\2\g\y\d\f\2\6\v\t\w\p\6\5\o\n\s\x\1\x\u\k\c\1\c\m\r\r\w\r\t\2\s\4\6\k\f\8\i\k\3\s\i\q\4\0\r\f\7\j\4\v\z\4\i\j\6\f\k\z\c\x\d\v\w\v\h\c\l\2\s\q\e\a\y\j\o\7\x\9\4\a\v\d\e\3\j\o\i\z\7\k\5\3\t\t\1\h\s\v\r\8\o\5\k\6\8\f\n\4\o\3\2\p\u\x\e\p\8\w\y\k\j\s\w\o\b\f\v\3\y\0\r\m\4\d\r\1\b\c\z\j\3\9\f\i\u\g\i\7\p\m\2\r\0\j\j\y\y\z\4\6\6\g\j\m\0\w\f\c\q\1\9\u\6\9\m\e\a\8\v\p\q\b\6\o\e\6\7\m\p\w\c\m\p\3\u\a\v\j\i\5\o\r\q\a\y\w\h\h\p\i\p\v\j\h\b\c\6\o\q\3\j\3\m\a\a\a\n\9\2\z\6\f\l\7\9\x\n\4\k\g\q\j\j\t\x\m\o\2\t\a\p\b\w\j\l\7\k\n\q\e\r\r\9\o\8\w\o\3\n\x\8\b\5\2\z\0\1\a\r\y\0\z\c\s\l\8\a\u\j\z\k\f\a\b\2\6\q\9\q\u\a\r\c\i\a\4\g\z\z\c\q\q\t\x\e\k\3\5\0\l\y\p\z\t\c\7\1\4\l\w\4\i\0\0\w\m\w\d\s\9\w\d\1\k\6\v\9\5\b\k\b\q\6\d\w\d\2\s\3\5\j\3\m\i\3\i\b\2\9\h\0\k\h\k\w\g\q\9\8\u\k\d\i\t\f\s\y\j\v\a\9\2\b\8\o\z\4\7\g\7\g\v\p\5\g\e\5\t\f\h\h\i\0\4\r\f\e\b\c\f\9\w\3\r\7\6\o\g\g\s\u\v\3\j\u\q\4\4\l\v\i\v\t\c\c\y\1\9\2\e\1\9\h\3\4\4\c\2\u\d\a\b\f\i\e\j\a\i\w\4\x\c\f\h\t\y\t\2\a\j\m\p\v\3\f\k\1\s\5\j\a\y\m\d\l\r\k\q\w\8\u\b\1\j\a\2\v\y\v\a\2\m\j\w\z\t\7\3\s\m\i\c\u\0\z\j\m\z\8\t\m\p\m\q\7\1\z\z\y\6\s\4\u\r\d\9\x\0\n\4\l\n\z\x\r\m\r\0\s\z\w\a\c\a\4\j\h\s\k\o\o\z\q\4\9\n\j\y\r\6\j\4\0\6\0\p\9\5\r\z\v\l\q\a\l\b\n\v\6\f\t\l\4\4\i\k\2\f\b\f\4\g\7\m\l\n\t\g\j\3\v\n\0\a\n\1\s\d\i\g\1\z\f\3\b\f\k\w\x\d\n\k\h\n\x\j\h\o\h\r\t\p\r\a\n\6\b\3\k\m\e\7\q\6\k\d\b\j\z\f\g\n\0\q\4\p\2\5\s\l\o\3\p\q\n\q\j\x\1\s\n\6\z\6\k\p\2\s\d\f\a\s\c\d\t\h\j\q\o\o\1\k\7\k\7\f\6\m\5\h\4\v\6\r\u\y\q\s\q\9\b\x\p\z\9\q\v\k\0\g\p\q\e\1\7\v\h\n\7\e\2\e\x\8\w\i\w\a\o\r\9\o\z\2\4\n\j\o\r\d\5\7\b\t\w\t\c\l\k\2\z\4\q\t\0\g\o\a\m\m\q\l\p\7\x\s\6\7\g\e\x\i\8\d\2\k\3\4\2\6\p\g\m\t\f\2\n\3\y\9\1\l\7\r\1\7\v\y\7\d\x\x\a\i\w\h\c\h\8\3\l\6\3\9\2\0\j\a\5\a\0\l\x\s\s\8\9\l\q\z\f\p\9\m\w\d\2\d\n\q\o\z\6\s\r\6\l\b\k\r\2\w\0\i\o\7\9\t\h\k\e\u\w ]] 00:25:28.695 00:25:28.695 real 0m1.356s 00:25:28.695 user 0m0.941s 00:25:28.695 sys 0m0.275s 00:25:28.695 08:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.695 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.695 08:22:01 -- dd/basic_rw.sh@1 -- # cleanup 00:25:28.695 08:22:01 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:25:28.695 08:22:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:28.695 08:22:01 -- dd/common.sh@11 -- # local nvme_ref= 00:25:28.695 08:22:01 -- dd/common.sh@12 -- # local size=0xffff 00:25:28.695 08:22:01 -- dd/common.sh@14 -- # local bs=1048576 00:25:28.695 08:22:01 -- dd/common.sh@15 -- # local count=1 00:25:28.695 08:22:01 -- dd/common.sh@18 -- # gen_conf 00:25:28.695 08:22:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:28.695 08:22:01 -- dd/common.sh@31 -- # xtrace_disable 00:25:28.695 08:22:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.695 [2024-04-17 08:22:01.915828] Starting SPDK v24.01.1-pre git sha1 
36faa8c31 / DPDK 23.11.0 initialization... 00:25:28.695 [2024-04-17 08:22:01.915896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58297 ] 00:25:28.695 { 00:25:28.695 "subsystems": [ 00:25:28.695 { 00:25:28.695 "subsystem": "bdev", 00:25:28.695 "config": [ 00:25:28.695 { 00:25:28.695 "params": { 00:25:28.695 "trtype": "pcie", 00:25:28.695 "traddr": "0000:00:06.0", 00:25:28.695 "name": "Nvme0" 00:25:28.695 }, 00:25:28.695 "method": "bdev_nvme_attach_controller" 00:25:28.695 }, 00:25:28.695 { 00:25:28.695 "method": "bdev_wait_for_examine" 00:25:28.695 } 00:25:28.695 ] 00:25:28.695 } 00:25:28.695 ] 00:25:28.695 } 00:25:28.953 [2024-04-17 08:22:02.053986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.953 [2024-04-17 08:22:02.157974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.210  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:29.210 00:25:29.210 08:22:02 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:29.210 ************************************ 00:25:29.210 END TEST spdk_dd_basic_rw 00:25:29.210 ************************************ 00:25:29.210 00:25:29.210 real 0m17.587s 00:25:29.210 user 0m12.755s 00:25:29.210 sys 0m3.466s 00:25:29.210 08:22:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.210 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.471 08:22:02 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:29.471 08:22:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:29.471 08:22:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:29.471 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.471 ************************************ 00:25:29.471 START TEST spdk_dd_posix 00:25:29.471 ************************************ 00:25:29.471 08:22:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:29.471 * Looking for test storage... 
00:25:29.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:29.471 08:22:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:29.471 08:22:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.471 08:22:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.471 08:22:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.471 08:22:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.471 08:22:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.471 08:22:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.471 08:22:02 -- paths/export.sh@5 -- # export PATH 00:25:29.471 08:22:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.471 08:22:02 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:25:29.471 08:22:02 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:25:29.471 08:22:02 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:25:29.471 08:22:02 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:25:29.471 08:22:02 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:29.471 08:22:02 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:29.471 08:22:02 -- dd/posix.sh@130 -- # tests 00:25:29.471 08:22:02 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:25:29.471 * First test run, liburing in use 00:25:29.471 08:22:02 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:25:29.471 08:22:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:29.471 08:22:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:29.471 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.471 ************************************ 00:25:29.471 START TEST dd_flag_append 00:25:29.471 ************************************ 00:25:29.471 08:22:02 -- common/autotest_common.sh@1104 -- # append 00:25:29.471 08:22:02 -- dd/posix.sh@16 -- # local dump0 00:25:29.471 08:22:02 -- dd/posix.sh@17 -- # local dump1 00:25:29.471 08:22:02 -- dd/posix.sh@19 -- # gen_bytes 32 00:25:29.471 08:22:02 -- dd/common.sh@98 -- # xtrace_disable 00:25:29.471 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.472 08:22:02 -- dd/posix.sh@19 -- # dump0=r05g1ywuemim08iz949eu98jbq8uv17s 00:25:29.472 08:22:02 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:29.472 08:22:02 -- dd/common.sh@98 -- # xtrace_disable 00:25:29.472 08:22:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.472 08:22:02 -- dd/posix.sh@20 -- # dump1=bcbpor31gmoagyp7kau7aslfctxftynm 00:25:29.472 08:22:02 -- dd/posix.sh@22 -- # printf %s r05g1ywuemim08iz949eu98jbq8uv17s 00:25:29.472 08:22:02 -- dd/posix.sh@23 -- # printf %s bcbpor31gmoagyp7kau7aslfctxftynm 00:25:29.472 08:22:02 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:29.736 [2024-04-17 08:22:02.813681] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:29.736 [2024-04-17 08:22:02.813854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58353 ] 00:25:29.736 [2024-04-17 08:22:02.955471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.736 [2024-04-17 08:22:03.061667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.252  Copying: 32/32 [B] (average 31 kBps) 00:25:30.252 00:25:30.253 08:22:03 -- dd/posix.sh@27 -- # [[ bcbpor31gmoagyp7kau7aslfctxftynmr05g1ywuemim08iz949eu98jbq8uv17s == \b\c\b\p\o\r\3\1\g\m\o\a\g\y\p\7\k\a\u\7\a\s\l\f\c\t\x\f\t\y\n\m\r\0\5\g\1\y\w\u\e\m\i\m\0\8\i\z\9\4\9\e\u\9\8\j\b\q\8\u\v\1\7\s ]] 00:25:30.253 00:25:30.253 real 0m0.609s 00:25:30.253 user 0m0.353s 00:25:30.253 sys 0m0.135s 00:25:30.253 08:22:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.253 08:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:30.253 ************************************ 00:25:30.253 END TEST dd_flag_append 00:25:30.253 ************************************ 00:25:30.253 08:22:03 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:25:30.253 08:22:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:30.253 08:22:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.253 08:22:03 -- common/autotest_common.sh@10 -- # set +x 00:25:30.253 ************************************ 00:25:30.253 START TEST dd_flag_directory 00:25:30.253 ************************************ 00:25:30.253 08:22:03 -- common/autotest_common.sh@1104 -- # directory 00:25:30.253 08:22:03 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:30.253 08:22:03 -- 
common/autotest_common.sh@640 -- # local es=0 00:25:30.253 08:22:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:30.253 08:22:03 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.253 08:22:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.253 08:22:03 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.253 08:22:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.253 08:22:03 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.253 08:22:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.253 08:22:03 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.253 08:22:03 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:30.253 08:22:03 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:30.253 [2024-04-17 08:22:03.483222] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:30.253 [2024-04-17 08:22:03.483415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58380 ] 00:25:30.511 [2024-04-17 08:22:03.622407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.511 [2024-04-17 08:22:03.726585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.511 [2024-04-17 08:22:03.795774] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:30.511 [2024-04-17 08:22:03.795820] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:30.511 [2024-04-17 08:22:03.795829] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:30.769 [2024-04-17 08:22:03.888066] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:30.769 08:22:03 -- common/autotest_common.sh@643 -- # es=236 00:25:30.769 08:22:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:30.769 08:22:04 -- common/autotest_common.sh@652 -- # es=108 00:25:30.769 08:22:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:30.769 08:22:04 -- common/autotest_common.sh@660 -- # es=1 00:25:30.769 08:22:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:30.769 08:22:04 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:30.769 08:22:04 -- common/autotest_common.sh@640 -- # local es=0 00:25:30.769 08:22:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:30.769 08:22:04 -- common/autotest_common.sh@628 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.769 08:22:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.769 08:22:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.769 08:22:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.769 08:22:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.769 08:22:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.769 08:22:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.769 08:22:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:30.769 08:22:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:30.769 [2024-04-17 08:22:04.061314] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:30.769 [2024-04-17 08:22:04.061483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58395 ] 00:25:31.027 [2024-04-17 08:22:04.199259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.027 [2024-04-17 08:22:04.302087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.286 [2024-04-17 08:22:04.369451] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:31.286 [2024-04-17 08:22:04.369588] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:31.286 [2024-04-17 08:22:04.369629] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:31.286 [2024-04-17 08:22:04.462808] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:31.286 ************************************ 00:25:31.286 END TEST dd_flag_directory 00:25:31.286 ************************************ 00:25:31.286 08:22:04 -- common/autotest_common.sh@643 -- # es=236 00:25:31.286 08:22:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:31.286 08:22:04 -- common/autotest_common.sh@652 -- # es=108 00:25:31.286 08:22:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:31.286 08:22:04 -- common/autotest_common.sh@660 -- # es=1 00:25:31.286 08:22:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:31.286 00:25:31.286 real 0m1.154s 00:25:31.286 user 0m0.682s 00:25:31.286 sys 0m0.261s 00:25:31.286 08:22:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:31.286 08:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:31.545 08:22:04 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:25:31.545 08:22:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:31.545 08:22:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:31.545 08:22:04 -- common/autotest_common.sh@10 -- # set +x 00:25:31.545 ************************************ 00:25:31.545 START TEST dd_flag_nofollow 00:25:31.545 ************************************ 00:25:31.545 08:22:04 -- common/autotest_common.sh@1104 -- # nofollow 00:25:31.545 08:22:04 -- dd/posix.sh@36 -- # local 
test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:31.545 08:22:04 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:31.545 08:22:04 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:31.545 08:22:04 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:31.545 08:22:04 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:31.545 08:22:04 -- common/autotest_common.sh@640 -- # local es=0 00:25:31.545 08:22:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:31.545 08:22:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:31.545 08:22:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:31.545 08:22:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:31.545 08:22:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:31.545 08:22:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:31.545 08:22:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:31.545 08:22:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:31.545 08:22:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:31.545 08:22:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:31.545 [2024-04-17 08:22:04.691160] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
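dd_flag_nofollow points symlinks at the two dump files and expects spdk_dd to refuse them when --iflag=nofollow or --oflag=nofollow is set, while the plain copy through the link still succeeds. A minimal sketch of the same behaviour (file-to-file only, no bdev config needed), assuming /tmp paths:

    ln -fs /tmp/dd.dump0 /tmp/dd.dump0.link
    # expected to fail: nofollow refuses to open a symlink ("Too many levels of symbolic links")
    ./build/bin/spdk_dd --if=/tmp/dd.dump0.link --iflag=nofollow --of=/tmp/dd.dump1 || echo "rejected as expected"
    # the same copy without the flag follows the link normally
    ./build/bin/spdk_dd --if=/tmp/dd.dump0.link --of=/tmp/dd.dump1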
00:25:31.545 [2024-04-17 08:22:04.691344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58418 ] 00:25:31.545 [2024-04-17 08:22:04.830228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.803 [2024-04-17 08:22:04.927157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.803 [2024-04-17 08:22:04.994729] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:31.803 [2024-04-17 08:22:04.994875] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:31.803 [2024-04-17 08:22:04.994912] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:31.803 [2024-04-17 08:22:05.089039] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:32.061 08:22:05 -- common/autotest_common.sh@643 -- # es=216 00:25:32.061 08:22:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:32.061 08:22:05 -- common/autotest_common.sh@652 -- # es=88 00:25:32.061 08:22:05 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:32.061 08:22:05 -- common/autotest_common.sh@660 -- # es=1 00:25:32.061 08:22:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:32.061 08:22:05 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:32.061 08:22:05 -- common/autotest_common.sh@640 -- # local es=0 00:25:32.061 08:22:05 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:32.061 08:22:05 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.061 08:22:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:32.061 08:22:05 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.061 08:22:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:32.061 08:22:05 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.062 08:22:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:32.062 08:22:05 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.062 08:22:05 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:32.062 08:22:05 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:32.062 [2024-04-17 08:22:05.256962] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:32.062 [2024-04-17 08:22:05.257035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58433 ] 00:25:32.320 [2024-04-17 08:22:05.395684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.320 [2024-04-17 08:22:05.499076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.320 [2024-04-17 08:22:05.568113] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:32.320 [2024-04-17 08:22:05.568158] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:32.320 [2024-04-17 08:22:05.568170] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:32.578 [2024-04-17 08:22:05.661105] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:32.578 08:22:05 -- common/autotest_common.sh@643 -- # es=216 00:25:32.578 08:22:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:32.578 08:22:05 -- common/autotest_common.sh@652 -- # es=88 00:25:32.578 08:22:05 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:32.578 08:22:05 -- common/autotest_common.sh@660 -- # es=1 00:25:32.578 08:22:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:32.578 08:22:05 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:32.578 08:22:05 -- dd/common.sh@98 -- # xtrace_disable 00:25:32.578 08:22:05 -- common/autotest_common.sh@10 -- # set +x 00:25:32.578 08:22:05 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:32.578 [2024-04-17 08:22:05.832136] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:32.578 [2024-04-17 08:22:05.832258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58439 ] 00:25:32.836 [2024-04-17 08:22:05.970744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.836 [2024-04-17 08:22:06.068796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.093  Copying: 512/512 [B] (average 500 kBps) 00:25:33.093 00:25:33.093 08:22:06 -- dd/posix.sh@49 -- # [[ 2daoixefxi55lwn5s166kzhw1xobaki9qpqd7daf77vls0e3asa7q81qet0ctkhv3awfymvdb2j8ico6ryiatnotsu67e3ms9mzux1bfq8on4pobksb56jf21xi6qiu2gway0j9ag57hegwg6jeenvi3mpony4h2yquh6r14knnhhztu5uj5wszuu6i0kmku4ixk7s9gf444egmgx1rm5o460lbkdbl40rsg6j2xj2l8p466veq85juy676yqpj8dbzpuqpewfqhsvg1f6w8pwtablfys57zivmqlx09rsv02st4b4g8z1vgpwzlz421pii8t53mzzgc5hs7x9k24fqb33oare3cofjeoc5r520e2ile5ny4u58ypxq0btsp6cvwsb042xsbxu52xpn1j2nnt7se4pzryjiq0esx9sd9x5jntarw5jojhz8hmmw55vwra6s7n7425yra0h3a0aur4uonng5r8r0zjjwgjd60lma7l0uahghqf6ijqe38 == \2\d\a\o\i\x\e\f\x\i\5\5\l\w\n\5\s\1\6\6\k\z\h\w\1\x\o\b\a\k\i\9\q\p\q\d\7\d\a\f\7\7\v\l\s\0\e\3\a\s\a\7\q\8\1\q\e\t\0\c\t\k\h\v\3\a\w\f\y\m\v\d\b\2\j\8\i\c\o\6\r\y\i\a\t\n\o\t\s\u\6\7\e\3\m\s\9\m\z\u\x\1\b\f\q\8\o\n\4\p\o\b\k\s\b\5\6\j\f\2\1\x\i\6\q\i\u\2\g\w\a\y\0\j\9\a\g\5\7\h\e\g\w\g\6\j\e\e\n\v\i\3\m\p\o\n\y\4\h\2\y\q\u\h\6\r\1\4\k\n\n\h\h\z\t\u\5\u\j\5\w\s\z\u\u\6\i\0\k\m\k\u\4\i\x\k\7\s\9\g\f\4\4\4\e\g\m\g\x\1\r\m\5\o\4\6\0\l\b\k\d\b\l\4\0\r\s\g\6\j\2\x\j\2\l\8\p\4\6\6\v\e\q\8\5\j\u\y\6\7\6\y\q\p\j\8\d\b\z\p\u\q\p\e\w\f\q\h\s\v\g\1\f\6\w\8\p\w\t\a\b\l\f\y\s\5\7\z\i\v\m\q\l\x\0\9\r\s\v\0\2\s\t\4\b\4\g\8\z\1\v\g\p\w\z\l\z\4\2\1\p\i\i\8\t\5\3\m\z\z\g\c\5\h\s\7\x\9\k\2\4\f\q\b\3\3\o\a\r\e\3\c\o\f\j\e\o\c\5\r\5\2\0\e\2\i\l\e\5\n\y\4\u\5\8\y\p\x\q\0\b\t\s\p\6\c\v\w\s\b\0\4\2\x\s\b\x\u\5\2\x\p\n\1\j\2\n\n\t\7\s\e\4\p\z\r\y\j\i\q\0\e\s\x\9\s\d\9\x\5\j\n\t\a\r\w\5\j\o\j\h\z\8\h\m\m\w\5\5\v\w\r\a\6\s\7\n\7\4\2\5\y\r\a\0\h\3\a\0\a\u\r\4\u\o\n\n\g\5\r\8\r\0\z\j\j\w\g\j\d\6\0\l\m\a\7\l\0\u\a\h\g\h\q\f\6\i\j\q\e\3\8 ]] 00:25:33.093 00:25:33.093 real 0m1.728s 00:25:33.093 user 0m1.051s 00:25:33.093 sys 0m0.345s 00:25:33.093 08:22:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.093 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:25:33.093 ************************************ 00:25:33.093 END TEST dd_flag_nofollow 00:25:33.093 ************************************ 00:25:33.094 08:22:06 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:25:33.094 08:22:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:33.094 08:22:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:33.094 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:25:33.352 ************************************ 00:25:33.352 START TEST dd_flag_noatime 00:25:33.352 ************************************ 00:25:33.352 08:22:06 -- common/autotest_common.sh@1104 -- # noatime 00:25:33.352 08:22:06 -- dd/posix.sh@53 -- # local atime_if 00:25:33.352 08:22:06 -- dd/posix.sh@54 -- # local atime_of 00:25:33.352 08:22:06 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:33.352 08:22:06 -- dd/common.sh@98 -- # xtrace_disable 00:25:33.352 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:25:33.352 08:22:06 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:33.352 08:22:06 -- dd/posix.sh@60 -- # atime_if=1713342126 00:25:33.352 08:22:06 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:33.352 08:22:06 -- dd/posix.sh@61 -- # atime_of=1713342126 00:25:33.352 08:22:06 -- dd/posix.sh@66 -- # sleep 1 00:25:34.286 08:22:07 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:34.286 [2024-04-17 08:22:07.510979] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:34.286 [2024-04-17 08:22:07.511125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58484 ] 00:25:34.545 [2024-04-17 08:22:07.665384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.545 [2024-04-17 08:22:07.766459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.803  Copying: 512/512 [B] (average 500 kBps) 00:25:34.803 00:25:34.803 08:22:08 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:34.803 08:22:08 -- dd/posix.sh@69 -- # (( atime_if == 1713342126 )) 00:25:34.803 08:22:08 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:34.803 08:22:08 -- dd/posix.sh@70 -- # (( atime_of == 1713342126 )) 00:25:34.803 08:22:08 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:34.803 [2024-04-17 08:22:08.132825] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:34.803 [2024-04-17 08:22:08.132907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58501 ] 00:25:35.062 [2024-04-17 08:22:08.272449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.062 [2024-04-17 08:22:08.375995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.577  Copying: 512/512 [B] (average 500 kBps) 00:25:35.577 00:25:35.577 08:22:08 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:35.577 ************************************ 00:25:35.577 END TEST dd_flag_noatime 00:25:35.577 ************************************ 00:25:35.577 08:22:08 -- dd/posix.sh@73 -- # (( atime_if < 1713342128 )) 00:25:35.577 00:25:35.577 real 0m2.249s 00:25:35.577 user 0m0.730s 00:25:35.577 sys 0m0.267s 00:25:35.578 08:22:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.578 08:22:08 -- common/autotest_common.sh@10 -- # set +x 00:25:35.578 08:22:08 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:25:35.578 08:22:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:35.578 08:22:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:35.578 08:22:08 -- common/autotest_common.sh@10 -- # set +x 00:25:35.578 ************************************ 00:25:35.578 START TEST dd_flags_misc 00:25:35.578 ************************************ 00:25:35.578 08:22:08 -- common/autotest_common.sh@1104 -- # io 00:25:35.578 08:22:08 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:25:35.578 08:22:08 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
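dd_flag_noatime records the source file's access time with stat --printf=%X, copies with --iflag=noatime and checks the atime is unchanged, then copies again without the flag and expects it to move forward. A minimal sketch; the second check assumes a filesystem that still updates atime (not mounted noatime):

    atime_before=$(stat --printf=%X /tmp/dd.dump0)
    sleep 1
    ./build/bin/spdk_dd --if=/tmp/dd.dump0 --iflag=noatime --of=/tmp/dd.dump1
    (( $(stat --printf=%X /tmp/dd.dump0) == atime_before )) && echo "atime untouched with noatime"
    ./build/bin/spdk_dd --if=/tmp/dd.dump0 --of=/tmp/dd.dump1
    (( $(stat --printf=%X /tmp/dd.dump0) > atime_before )) && echo "atime advanced without it"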
00:25:35.578 08:22:08 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:25:35.578 08:22:08 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:35.578 08:22:08 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:35.578 08:22:08 -- dd/common.sh@98 -- # xtrace_disable 00:25:35.578 08:22:08 -- common/autotest_common.sh@10 -- # set +x 00:25:35.578 08:22:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:35.578 08:22:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:35.578 [2024-04-17 08:22:08.797253] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:35.578 [2024-04-17 08:22:08.797415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58522 ] 00:25:35.847 [2024-04-17 08:22:08.933323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.848 [2024-04-17 08:22:09.035857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.109  Copying: 512/512 [B] (average 500 kBps) 00:25:36.109 00:25:36.110 08:22:09 -- dd/posix.sh@93 -- # [[ 7f6fa1gsfamwxt97a44zxhkbvhdapg5br0httspdp2gp6m3u7k7t5h82rsksobsceejjscgbswwtrc2cjm2i66746ko4d648k92to5ohqd18jlyw0u2ghfno3zk805yloqe90ang9ovclh25z00ygt0m9i8bwk2mo0i77yqq6wcaircssvx8eh7gqt892343i91wb3bqptoyd3c27sfy9grrhjbxlte283ea6an6r9eqd3j0erunoy7w2x80iu9o2iv05yuq1gi7g0wvtip5jxmpka3f7fphtpeontgsnw4e9bb9x0ch1c7jqeweu321eadd6g7dqiic9nfizngzyi9wxfoidrizwl449z992chbcyu6kg08fc687o7b0znx1dpedd0b25cbwbszqmxypfv2f62iufdx297r0rgw333oicc1546s89k13gprnkbpnu3vijlve58ptlrmzgfo6612vf4vq6zrrbd7jw338t97esttoglqelj8xpf54pdv == \7\f\6\f\a\1\g\s\f\a\m\w\x\t\9\7\a\4\4\z\x\h\k\b\v\h\d\a\p\g\5\b\r\0\h\t\t\s\p\d\p\2\g\p\6\m\3\u\7\k\7\t\5\h\8\2\r\s\k\s\o\b\s\c\e\e\j\j\s\c\g\b\s\w\w\t\r\c\2\c\j\m\2\i\6\6\7\4\6\k\o\4\d\6\4\8\k\9\2\t\o\5\o\h\q\d\1\8\j\l\y\w\0\u\2\g\h\f\n\o\3\z\k\8\0\5\y\l\o\q\e\9\0\a\n\g\9\o\v\c\l\h\2\5\z\0\0\y\g\t\0\m\9\i\8\b\w\k\2\m\o\0\i\7\7\y\q\q\6\w\c\a\i\r\c\s\s\v\x\8\e\h\7\g\q\t\8\9\2\3\4\3\i\9\1\w\b\3\b\q\p\t\o\y\d\3\c\2\7\s\f\y\9\g\r\r\h\j\b\x\l\t\e\2\8\3\e\a\6\a\n\6\r\9\e\q\d\3\j\0\e\r\u\n\o\y\7\w\2\x\8\0\i\u\9\o\2\i\v\0\5\y\u\q\1\g\i\7\g\0\w\v\t\i\p\5\j\x\m\p\k\a\3\f\7\f\p\h\t\p\e\o\n\t\g\s\n\w\4\e\9\b\b\9\x\0\c\h\1\c\7\j\q\e\w\e\u\3\2\1\e\a\d\d\6\g\7\d\q\i\i\c\9\n\f\i\z\n\g\z\y\i\9\w\x\f\o\i\d\r\i\z\w\l\4\4\9\z\9\9\2\c\h\b\c\y\u\6\k\g\0\8\f\c\6\8\7\o\7\b\0\z\n\x\1\d\p\e\d\d\0\b\2\5\c\b\w\b\s\z\q\m\x\y\p\f\v\2\f\6\2\i\u\f\d\x\2\9\7\r\0\r\g\w\3\3\3\o\i\c\c\1\5\4\6\s\8\9\k\1\3\g\p\r\n\k\b\p\n\u\3\v\i\j\l\v\e\5\8\p\t\l\r\m\z\g\f\o\6\6\1\2\v\f\4\v\q\6\z\r\r\b\d\7\j\w\3\3\8\t\9\7\e\s\t\t\o\g\l\q\e\l\j\8\x\p\f\5\4\p\d\v ]] 00:25:36.110 08:22:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:36.110 08:22:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:36.110 [2024-04-17 08:22:09.379992] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:36.110 [2024-04-17 08:22:09.380070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58535 ] 00:25:36.368 [2024-04-17 08:22:09.517958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.368 [2024-04-17 08:22:09.618524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.627  Copying: 512/512 [B] (average 500 kBps) 00:25:36.627 00:25:36.627 08:22:09 -- dd/posix.sh@93 -- # [[ 7f6fa1gsfamwxt97a44zxhkbvhdapg5br0httspdp2gp6m3u7k7t5h82rsksobsceejjscgbswwtrc2cjm2i66746ko4d648k92to5ohqd18jlyw0u2ghfno3zk805yloqe90ang9ovclh25z00ygt0m9i8bwk2mo0i77yqq6wcaircssvx8eh7gqt892343i91wb3bqptoyd3c27sfy9grrhjbxlte283ea6an6r9eqd3j0erunoy7w2x80iu9o2iv05yuq1gi7g0wvtip5jxmpka3f7fphtpeontgsnw4e9bb9x0ch1c7jqeweu321eadd6g7dqiic9nfizngzyi9wxfoidrizwl449z992chbcyu6kg08fc687o7b0znx1dpedd0b25cbwbszqmxypfv2f62iufdx297r0rgw333oicc1546s89k13gprnkbpnu3vijlve58ptlrmzgfo6612vf4vq6zrrbd7jw338t97esttoglqelj8xpf54pdv == \7\f\6\f\a\1\g\s\f\a\m\w\x\t\9\7\a\4\4\z\x\h\k\b\v\h\d\a\p\g\5\b\r\0\h\t\t\s\p\d\p\2\g\p\6\m\3\u\7\k\7\t\5\h\8\2\r\s\k\s\o\b\s\c\e\e\j\j\s\c\g\b\s\w\w\t\r\c\2\c\j\m\2\i\6\6\7\4\6\k\o\4\d\6\4\8\k\9\2\t\o\5\o\h\q\d\1\8\j\l\y\w\0\u\2\g\h\f\n\o\3\z\k\8\0\5\y\l\o\q\e\9\0\a\n\g\9\o\v\c\l\h\2\5\z\0\0\y\g\t\0\m\9\i\8\b\w\k\2\m\o\0\i\7\7\y\q\q\6\w\c\a\i\r\c\s\s\v\x\8\e\h\7\g\q\t\8\9\2\3\4\3\i\9\1\w\b\3\b\q\p\t\o\y\d\3\c\2\7\s\f\y\9\g\r\r\h\j\b\x\l\t\e\2\8\3\e\a\6\a\n\6\r\9\e\q\d\3\j\0\e\r\u\n\o\y\7\w\2\x\8\0\i\u\9\o\2\i\v\0\5\y\u\q\1\g\i\7\g\0\w\v\t\i\p\5\j\x\m\p\k\a\3\f\7\f\p\h\t\p\e\o\n\t\g\s\n\w\4\e\9\b\b\9\x\0\c\h\1\c\7\j\q\e\w\e\u\3\2\1\e\a\d\d\6\g\7\d\q\i\i\c\9\n\f\i\z\n\g\z\y\i\9\w\x\f\o\i\d\r\i\z\w\l\4\4\9\z\9\9\2\c\h\b\c\y\u\6\k\g\0\8\f\c\6\8\7\o\7\b\0\z\n\x\1\d\p\e\d\d\0\b\2\5\c\b\w\b\s\z\q\m\x\y\p\f\v\2\f\6\2\i\u\f\d\x\2\9\7\r\0\r\g\w\3\3\3\o\i\c\c\1\5\4\6\s\8\9\k\1\3\g\p\r\n\k\b\p\n\u\3\v\i\j\l\v\e\5\8\p\t\l\r\m\z\g\f\o\6\6\1\2\v\f\4\v\q\6\z\r\r\b\d\7\j\w\3\3\8\t\9\7\e\s\t\t\o\g\l\q\e\l\j\8\x\p\f\5\4\p\d\v ]] 00:25:36.627 08:22:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:36.627 08:22:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:36.627 [2024-04-17 08:22:09.957334] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:36.627 [2024-04-17 08:22:09.957410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58537 ] 00:25:36.885 [2024-04-17 08:22:10.097231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.885 [2024-04-17 08:22:10.196535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.404  Copying: 512/512 [B] (average 100 kBps) 00:25:37.404 00:25:37.404 08:22:10 -- dd/posix.sh@93 -- # [[ 7f6fa1gsfamwxt97a44zxhkbvhdapg5br0httspdp2gp6m3u7k7t5h82rsksobsceejjscgbswwtrc2cjm2i66746ko4d648k92to5ohqd18jlyw0u2ghfno3zk805yloqe90ang9ovclh25z00ygt0m9i8bwk2mo0i77yqq6wcaircssvx8eh7gqt892343i91wb3bqptoyd3c27sfy9grrhjbxlte283ea6an6r9eqd3j0erunoy7w2x80iu9o2iv05yuq1gi7g0wvtip5jxmpka3f7fphtpeontgsnw4e9bb9x0ch1c7jqeweu321eadd6g7dqiic9nfizngzyi9wxfoidrizwl449z992chbcyu6kg08fc687o7b0znx1dpedd0b25cbwbszqmxypfv2f62iufdx297r0rgw333oicc1546s89k13gprnkbpnu3vijlve58ptlrmzgfo6612vf4vq6zrrbd7jw338t97esttoglqelj8xpf54pdv == \7\f\6\f\a\1\g\s\f\a\m\w\x\t\9\7\a\4\4\z\x\h\k\b\v\h\d\a\p\g\5\b\r\0\h\t\t\s\p\d\p\2\g\p\6\m\3\u\7\k\7\t\5\h\8\2\r\s\k\s\o\b\s\c\e\e\j\j\s\c\g\b\s\w\w\t\r\c\2\c\j\m\2\i\6\6\7\4\6\k\o\4\d\6\4\8\k\9\2\t\o\5\o\h\q\d\1\8\j\l\y\w\0\u\2\g\h\f\n\o\3\z\k\8\0\5\y\l\o\q\e\9\0\a\n\g\9\o\v\c\l\h\2\5\z\0\0\y\g\t\0\m\9\i\8\b\w\k\2\m\o\0\i\7\7\y\q\q\6\w\c\a\i\r\c\s\s\v\x\8\e\h\7\g\q\t\8\9\2\3\4\3\i\9\1\w\b\3\b\q\p\t\o\y\d\3\c\2\7\s\f\y\9\g\r\r\h\j\b\x\l\t\e\2\8\3\e\a\6\a\n\6\r\9\e\q\d\3\j\0\e\r\u\n\o\y\7\w\2\x\8\0\i\u\9\o\2\i\v\0\5\y\u\q\1\g\i\7\g\0\w\v\t\i\p\5\j\x\m\p\k\a\3\f\7\f\p\h\t\p\e\o\n\t\g\s\n\w\4\e\9\b\b\9\x\0\c\h\1\c\7\j\q\e\w\e\u\3\2\1\e\a\d\d\6\g\7\d\q\i\i\c\9\n\f\i\z\n\g\z\y\i\9\w\x\f\o\i\d\r\i\z\w\l\4\4\9\z\9\9\2\c\h\b\c\y\u\6\k\g\0\8\f\c\6\8\7\o\7\b\0\z\n\x\1\d\p\e\d\d\0\b\2\5\c\b\w\b\s\z\q\m\x\y\p\f\v\2\f\6\2\i\u\f\d\x\2\9\7\r\0\r\g\w\3\3\3\o\i\c\c\1\5\4\6\s\8\9\k\1\3\g\p\r\n\k\b\p\n\u\3\v\i\j\l\v\e\5\8\p\t\l\r\m\z\g\f\o\6\6\1\2\v\f\4\v\q\6\z\r\r\b\d\7\j\w\3\3\8\t\9\7\e\s\t\t\o\g\l\q\e\l\j\8\x\p\f\5\4\p\d\v ]] 00:25:37.404 08:22:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:37.404 08:22:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:37.404 [2024-04-17 08:22:10.552648] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:37.404 [2024-04-17 08:22:10.552710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58550 ] 00:25:37.404 [2024-04-17 08:22:10.688616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.663 [2024-04-17 08:22:10.789506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.922  Copying: 512/512 [B] (average 166 kBps) 00:25:37.922 00:25:37.922 08:22:11 -- dd/posix.sh@93 -- # [[ 7f6fa1gsfamwxt97a44zxhkbvhdapg5br0httspdp2gp6m3u7k7t5h82rsksobsceejjscgbswwtrc2cjm2i66746ko4d648k92to5ohqd18jlyw0u2ghfno3zk805yloqe90ang9ovclh25z00ygt0m9i8bwk2mo0i77yqq6wcaircssvx8eh7gqt892343i91wb3bqptoyd3c27sfy9grrhjbxlte283ea6an6r9eqd3j0erunoy7w2x80iu9o2iv05yuq1gi7g0wvtip5jxmpka3f7fphtpeontgsnw4e9bb9x0ch1c7jqeweu321eadd6g7dqiic9nfizngzyi9wxfoidrizwl449z992chbcyu6kg08fc687o7b0znx1dpedd0b25cbwbszqmxypfv2f62iufdx297r0rgw333oicc1546s89k13gprnkbpnu3vijlve58ptlrmzgfo6612vf4vq6zrrbd7jw338t97esttoglqelj8xpf54pdv == \7\f\6\f\a\1\g\s\f\a\m\w\x\t\9\7\a\4\4\z\x\h\k\b\v\h\d\a\p\g\5\b\r\0\h\t\t\s\p\d\p\2\g\p\6\m\3\u\7\k\7\t\5\h\8\2\r\s\k\s\o\b\s\c\e\e\j\j\s\c\g\b\s\w\w\t\r\c\2\c\j\m\2\i\6\6\7\4\6\k\o\4\d\6\4\8\k\9\2\t\o\5\o\h\q\d\1\8\j\l\y\w\0\u\2\g\h\f\n\o\3\z\k\8\0\5\y\l\o\q\e\9\0\a\n\g\9\o\v\c\l\h\2\5\z\0\0\y\g\t\0\m\9\i\8\b\w\k\2\m\o\0\i\7\7\y\q\q\6\w\c\a\i\r\c\s\s\v\x\8\e\h\7\g\q\t\8\9\2\3\4\3\i\9\1\w\b\3\b\q\p\t\o\y\d\3\c\2\7\s\f\y\9\g\r\r\h\j\b\x\l\t\e\2\8\3\e\a\6\a\n\6\r\9\e\q\d\3\j\0\e\r\u\n\o\y\7\w\2\x\8\0\i\u\9\o\2\i\v\0\5\y\u\q\1\g\i\7\g\0\w\v\t\i\p\5\j\x\m\p\k\a\3\f\7\f\p\h\t\p\e\o\n\t\g\s\n\w\4\e\9\b\b\9\x\0\c\h\1\c\7\j\q\e\w\e\u\3\2\1\e\a\d\d\6\g\7\d\q\i\i\c\9\n\f\i\z\n\g\z\y\i\9\w\x\f\o\i\d\r\i\z\w\l\4\4\9\z\9\9\2\c\h\b\c\y\u\6\k\g\0\8\f\c\6\8\7\o\7\b\0\z\n\x\1\d\p\e\d\d\0\b\2\5\c\b\w\b\s\z\q\m\x\y\p\f\v\2\f\6\2\i\u\f\d\x\2\9\7\r\0\r\g\w\3\3\3\o\i\c\c\1\5\4\6\s\8\9\k\1\3\g\p\r\n\k\b\p\n\u\3\v\i\j\l\v\e\5\8\p\t\l\r\m\z\g\f\o\6\6\1\2\v\f\4\v\q\6\z\r\r\b\d\7\j\w\3\3\8\t\9\7\e\s\t\t\o\g\l\q\e\l\j\8\x\p\f\5\4\p\d\v ]] 00:25:37.922 08:22:11 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:37.922 08:22:11 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:37.922 08:22:11 -- dd/common.sh@98 -- # xtrace_disable 00:25:37.922 08:22:11 -- common/autotest_common.sh@10 -- # set +x 00:25:37.922 08:22:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:37.922 08:22:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:37.922 [2024-04-17 08:22:11.133074] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:37.922 [2024-04-17 08:22:11.133155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58560 ] 00:25:38.181 [2024-04-17 08:22:11.283091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.181 [2024-04-17 08:22:11.379040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.441  Copying: 512/512 [B] (average 500 kBps) 00:25:38.441 00:25:38.441 08:22:11 -- dd/posix.sh@93 -- # [[ rh5d6ah0hcrheeteialqkp60kxi1ghzp6b0wrlro0xxic3mnmyyze2h4msw2ubvpexa99nq2rm485434ikxtcyjf0i5zd01ydwhtvxqqk0tyy82w276samzc2udw43yc70047udkgau4j44nzx5127du2w2jz61fkhccj7z5h7pzriiht6pcjxndxkcj3c5pstholkb4ui1w5x11uayz84iecrrdevhw7dagca0f7yfw7q1hhknfkaayn35ek4dp0ozorawyrpx8ysshto695881lywk8t0rp4j939o6b6ro9o53ul1l1thgdia51kva6ynk5eskgtbhhtjgpgw3bintr87dtlxm3zpvi3ruab7umi4mmkfhfo3y9ts8y1a88s8tqqrzvby4nxds9m69tw2plegkcjj14zg6hgczmj5t0xj138g1um08oz15o5oyjn3w7ckdvjs6i7phw3ft1dk4pqsm2awx9q4583e9hepw6pdhvb1seh0wwto4bdn4 == \r\h\5\d\6\a\h\0\h\c\r\h\e\e\t\e\i\a\l\q\k\p\6\0\k\x\i\1\g\h\z\p\6\b\0\w\r\l\r\o\0\x\x\i\c\3\m\n\m\y\y\z\e\2\h\4\m\s\w\2\u\b\v\p\e\x\a\9\9\n\q\2\r\m\4\8\5\4\3\4\i\k\x\t\c\y\j\f\0\i\5\z\d\0\1\y\d\w\h\t\v\x\q\q\k\0\t\y\y\8\2\w\2\7\6\s\a\m\z\c\2\u\d\w\4\3\y\c\7\0\0\4\7\u\d\k\g\a\u\4\j\4\4\n\z\x\5\1\2\7\d\u\2\w\2\j\z\6\1\f\k\h\c\c\j\7\z\5\h\7\p\z\r\i\i\h\t\6\p\c\j\x\n\d\x\k\c\j\3\c\5\p\s\t\h\o\l\k\b\4\u\i\1\w\5\x\1\1\u\a\y\z\8\4\i\e\c\r\r\d\e\v\h\w\7\d\a\g\c\a\0\f\7\y\f\w\7\q\1\h\h\k\n\f\k\a\a\y\n\3\5\e\k\4\d\p\0\o\z\o\r\a\w\y\r\p\x\8\y\s\s\h\t\o\6\9\5\8\8\1\l\y\w\k\8\t\0\r\p\4\j\9\3\9\o\6\b\6\r\o\9\o\5\3\u\l\1\l\1\t\h\g\d\i\a\5\1\k\v\a\6\y\n\k\5\e\s\k\g\t\b\h\h\t\j\g\p\g\w\3\b\i\n\t\r\8\7\d\t\l\x\m\3\z\p\v\i\3\r\u\a\b\7\u\m\i\4\m\m\k\f\h\f\o\3\y\9\t\s\8\y\1\a\8\8\s\8\t\q\q\r\z\v\b\y\4\n\x\d\s\9\m\6\9\t\w\2\p\l\e\g\k\c\j\j\1\4\z\g\6\h\g\c\z\m\j\5\t\0\x\j\1\3\8\g\1\u\m\0\8\o\z\1\5\o\5\o\y\j\n\3\w\7\c\k\d\v\j\s\6\i\7\p\h\w\3\f\t\1\d\k\4\p\q\s\m\2\a\w\x\9\q\4\5\8\3\e\9\h\e\p\w\6\p\d\h\v\b\1\s\e\h\0\w\w\t\o\4\b\d\n\4 ]] 00:25:38.441 08:22:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:38.441 08:22:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:38.441 [2024-04-17 08:22:11.714544] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:38.441 [2024-04-17 08:22:11.714627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58567 ] 00:25:38.700 [2024-04-17 08:22:11.852519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.700 [2024-04-17 08:22:11.953356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.960  Copying: 512/512 [B] (average 500 kBps) 00:25:38.960 00:25:38.960 08:22:12 -- dd/posix.sh@93 -- # [[ rh5d6ah0hcrheeteialqkp60kxi1ghzp6b0wrlro0xxic3mnmyyze2h4msw2ubvpexa99nq2rm485434ikxtcyjf0i5zd01ydwhtvxqqk0tyy82w276samzc2udw43yc70047udkgau4j44nzx5127du2w2jz61fkhccj7z5h7pzriiht6pcjxndxkcj3c5pstholkb4ui1w5x11uayz84iecrrdevhw7dagca0f7yfw7q1hhknfkaayn35ek4dp0ozorawyrpx8ysshto695881lywk8t0rp4j939o6b6ro9o53ul1l1thgdia51kva6ynk5eskgtbhhtjgpgw3bintr87dtlxm3zpvi3ruab7umi4mmkfhfo3y9ts8y1a88s8tqqrzvby4nxds9m69tw2plegkcjj14zg6hgczmj5t0xj138g1um08oz15o5oyjn3w7ckdvjs6i7phw3ft1dk4pqsm2awx9q4583e9hepw6pdhvb1seh0wwto4bdn4 == \r\h\5\d\6\a\h\0\h\c\r\h\e\e\t\e\i\a\l\q\k\p\6\0\k\x\i\1\g\h\z\p\6\b\0\w\r\l\r\o\0\x\x\i\c\3\m\n\m\y\y\z\e\2\h\4\m\s\w\2\u\b\v\p\e\x\a\9\9\n\q\2\r\m\4\8\5\4\3\4\i\k\x\t\c\y\j\f\0\i\5\z\d\0\1\y\d\w\h\t\v\x\q\q\k\0\t\y\y\8\2\w\2\7\6\s\a\m\z\c\2\u\d\w\4\3\y\c\7\0\0\4\7\u\d\k\g\a\u\4\j\4\4\n\z\x\5\1\2\7\d\u\2\w\2\j\z\6\1\f\k\h\c\c\j\7\z\5\h\7\p\z\r\i\i\h\t\6\p\c\j\x\n\d\x\k\c\j\3\c\5\p\s\t\h\o\l\k\b\4\u\i\1\w\5\x\1\1\u\a\y\z\8\4\i\e\c\r\r\d\e\v\h\w\7\d\a\g\c\a\0\f\7\y\f\w\7\q\1\h\h\k\n\f\k\a\a\y\n\3\5\e\k\4\d\p\0\o\z\o\r\a\w\y\r\p\x\8\y\s\s\h\t\o\6\9\5\8\8\1\l\y\w\k\8\t\0\r\p\4\j\9\3\9\o\6\b\6\r\o\9\o\5\3\u\l\1\l\1\t\h\g\d\i\a\5\1\k\v\a\6\y\n\k\5\e\s\k\g\t\b\h\h\t\j\g\p\g\w\3\b\i\n\t\r\8\7\d\t\l\x\m\3\z\p\v\i\3\r\u\a\b\7\u\m\i\4\m\m\k\f\h\f\o\3\y\9\t\s\8\y\1\a\8\8\s\8\t\q\q\r\z\v\b\y\4\n\x\d\s\9\m\6\9\t\w\2\p\l\e\g\k\c\j\j\1\4\z\g\6\h\g\c\z\m\j\5\t\0\x\j\1\3\8\g\1\u\m\0\8\o\z\1\5\o\5\o\y\j\n\3\w\7\c\k\d\v\j\s\6\i\7\p\h\w\3\f\t\1\d\k\4\p\q\s\m\2\a\w\x\9\q\4\5\8\3\e\9\h\e\p\w\6\p\d\h\v\b\1\s\e\h\0\w\w\t\o\4\b\d\n\4 ]] 00:25:38.960 08:22:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:38.960 08:22:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:39.220 [2024-04-17 08:22:12.293556] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:39.220 [2024-04-17 08:22:12.293641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58579 ] 00:25:39.220 [2024-04-17 08:22:12.431113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.220 [2024-04-17 08:22:12.533863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.738  Copying: 512/512 [B] (average 125 kBps) 00:25:39.738 00:25:39.738 08:22:12 -- dd/posix.sh@93 -- # [[ rh5d6ah0hcrheeteialqkp60kxi1ghzp6b0wrlro0xxic3mnmyyze2h4msw2ubvpexa99nq2rm485434ikxtcyjf0i5zd01ydwhtvxqqk0tyy82w276samzc2udw43yc70047udkgau4j44nzx5127du2w2jz61fkhccj7z5h7pzriiht6pcjxndxkcj3c5pstholkb4ui1w5x11uayz84iecrrdevhw7dagca0f7yfw7q1hhknfkaayn35ek4dp0ozorawyrpx8ysshto695881lywk8t0rp4j939o6b6ro9o53ul1l1thgdia51kva6ynk5eskgtbhhtjgpgw3bintr87dtlxm3zpvi3ruab7umi4mmkfhfo3y9ts8y1a88s8tqqrzvby4nxds9m69tw2plegkcjj14zg6hgczmj5t0xj138g1um08oz15o5oyjn3w7ckdvjs6i7phw3ft1dk4pqsm2awx9q4583e9hepw6pdhvb1seh0wwto4bdn4 == \r\h\5\d\6\a\h\0\h\c\r\h\e\e\t\e\i\a\l\q\k\p\6\0\k\x\i\1\g\h\z\p\6\b\0\w\r\l\r\o\0\x\x\i\c\3\m\n\m\y\y\z\e\2\h\4\m\s\w\2\u\b\v\p\e\x\a\9\9\n\q\2\r\m\4\8\5\4\3\4\i\k\x\t\c\y\j\f\0\i\5\z\d\0\1\y\d\w\h\t\v\x\q\q\k\0\t\y\y\8\2\w\2\7\6\s\a\m\z\c\2\u\d\w\4\3\y\c\7\0\0\4\7\u\d\k\g\a\u\4\j\4\4\n\z\x\5\1\2\7\d\u\2\w\2\j\z\6\1\f\k\h\c\c\j\7\z\5\h\7\p\z\r\i\i\h\t\6\p\c\j\x\n\d\x\k\c\j\3\c\5\p\s\t\h\o\l\k\b\4\u\i\1\w\5\x\1\1\u\a\y\z\8\4\i\e\c\r\r\d\e\v\h\w\7\d\a\g\c\a\0\f\7\y\f\w\7\q\1\h\h\k\n\f\k\a\a\y\n\3\5\e\k\4\d\p\0\o\z\o\r\a\w\y\r\p\x\8\y\s\s\h\t\o\6\9\5\8\8\1\l\y\w\k\8\t\0\r\p\4\j\9\3\9\o\6\b\6\r\o\9\o\5\3\u\l\1\l\1\t\h\g\d\i\a\5\1\k\v\a\6\y\n\k\5\e\s\k\g\t\b\h\h\t\j\g\p\g\w\3\b\i\n\t\r\8\7\d\t\l\x\m\3\z\p\v\i\3\r\u\a\b\7\u\m\i\4\m\m\k\f\h\f\o\3\y\9\t\s\8\y\1\a\8\8\s\8\t\q\q\r\z\v\b\y\4\n\x\d\s\9\m\6\9\t\w\2\p\l\e\g\k\c\j\j\1\4\z\g\6\h\g\c\z\m\j\5\t\0\x\j\1\3\8\g\1\u\m\0\8\o\z\1\5\o\5\o\y\j\n\3\w\7\c\k\d\v\j\s\6\i\7\p\h\w\3\f\t\1\d\k\4\p\q\s\m\2\a\w\x\9\q\4\5\8\3\e\9\h\e\p\w\6\p\d\h\v\b\1\s\e\h\0\w\w\t\o\4\b\d\n\4 ]] 00:25:39.738 08:22:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:39.738 08:22:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:39.738 [2024-04-17 08:22:12.870610] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:39.738 [2024-04-17 08:22:12.870686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58582 ] 00:25:39.738 [2024-04-17 08:22:13.010022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.996 [2024-04-17 08:22:13.111039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.254  Copying: 512/512 [B] (average 166 kBps) 00:25:40.254 00:25:40.254 ************************************ 00:25:40.254 END TEST dd_flags_misc 00:25:40.254 ************************************ 00:25:40.254 08:22:13 -- dd/posix.sh@93 -- # [[ rh5d6ah0hcrheeteialqkp60kxi1ghzp6b0wrlro0xxic3mnmyyze2h4msw2ubvpexa99nq2rm485434ikxtcyjf0i5zd01ydwhtvxqqk0tyy82w276samzc2udw43yc70047udkgau4j44nzx5127du2w2jz61fkhccj7z5h7pzriiht6pcjxndxkcj3c5pstholkb4ui1w5x11uayz84iecrrdevhw7dagca0f7yfw7q1hhknfkaayn35ek4dp0ozorawyrpx8ysshto695881lywk8t0rp4j939o6b6ro9o53ul1l1thgdia51kva6ynk5eskgtbhhtjgpgw3bintr87dtlxm3zpvi3ruab7umi4mmkfhfo3y9ts8y1a88s8tqqrzvby4nxds9m69tw2plegkcjj14zg6hgczmj5t0xj138g1um08oz15o5oyjn3w7ckdvjs6i7phw3ft1dk4pqsm2awx9q4583e9hepw6pdhvb1seh0wwto4bdn4 == \r\h\5\d\6\a\h\0\h\c\r\h\e\e\t\e\i\a\l\q\k\p\6\0\k\x\i\1\g\h\z\p\6\b\0\w\r\l\r\o\0\x\x\i\c\3\m\n\m\y\y\z\e\2\h\4\m\s\w\2\u\b\v\p\e\x\a\9\9\n\q\2\r\m\4\8\5\4\3\4\i\k\x\t\c\y\j\f\0\i\5\z\d\0\1\y\d\w\h\t\v\x\q\q\k\0\t\y\y\8\2\w\2\7\6\s\a\m\z\c\2\u\d\w\4\3\y\c\7\0\0\4\7\u\d\k\g\a\u\4\j\4\4\n\z\x\5\1\2\7\d\u\2\w\2\j\z\6\1\f\k\h\c\c\j\7\z\5\h\7\p\z\r\i\i\h\t\6\p\c\j\x\n\d\x\k\c\j\3\c\5\p\s\t\h\o\l\k\b\4\u\i\1\w\5\x\1\1\u\a\y\z\8\4\i\e\c\r\r\d\e\v\h\w\7\d\a\g\c\a\0\f\7\y\f\w\7\q\1\h\h\k\n\f\k\a\a\y\n\3\5\e\k\4\d\p\0\o\z\o\r\a\w\y\r\p\x\8\y\s\s\h\t\o\6\9\5\8\8\1\l\y\w\k\8\t\0\r\p\4\j\9\3\9\o\6\b\6\r\o\9\o\5\3\u\l\1\l\1\t\h\g\d\i\a\5\1\k\v\a\6\y\n\k\5\e\s\k\g\t\b\h\h\t\j\g\p\g\w\3\b\i\n\t\r\8\7\d\t\l\x\m\3\z\p\v\i\3\r\u\a\b\7\u\m\i\4\m\m\k\f\h\f\o\3\y\9\t\s\8\y\1\a\8\8\s\8\t\q\q\r\z\v\b\y\4\n\x\d\s\9\m\6\9\t\w\2\p\l\e\g\k\c\j\j\1\4\z\g\6\h\g\c\z\m\j\5\t\0\x\j\1\3\8\g\1\u\m\0\8\o\z\1\5\o\5\o\y\j\n\3\w\7\c\k\d\v\j\s\6\i\7\p\h\w\3\f\t\1\d\k\4\p\q\s\m\2\a\w\x\9\q\4\5\8\3\e\9\h\e\p\w\6\p\d\h\v\b\1\s\e\h\0\w\w\t\o\4\b\d\n\4 ]] 00:25:40.254 00:25:40.254 real 0m4.684s 00:25:40.254 user 0m2.792s 00:25:40.254 sys 0m0.907s 00:25:40.254 08:22:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.254 08:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.254 08:22:13 -- dd/posix.sh@131 -- # tests_forced_aio 00:25:40.254 08:22:13 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:25:40.254 * Second test run, disabling liburing, forcing AIO 00:25:40.254 08:22:13 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:25:40.254 08:22:13 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:25:40.254 08:22:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:40.254 08:22:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.254 08:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.254 ************************************ 00:25:40.254 START TEST dd_flag_append_forced_aio 00:25:40.254 ************************************ 00:25:40.254 08:22:13 -- common/autotest_common.sh@1104 -- # append 00:25:40.254 08:22:13 -- dd/posix.sh@16 -- # local dump0 00:25:40.254 08:22:13 -- dd/posix.sh@17 -- # local dump1 00:25:40.254 08:22:13 -- dd/posix.sh@19 -- # gen_bytes 32 00:25:40.254 08:22:13 -- 
dd/common.sh@98 -- # xtrace_disable 00:25:40.254 08:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.254 08:22:13 -- dd/posix.sh@19 -- # dump0=5v3cvb5fvl3v0s89xiiif36q2k26c7sr 00:25:40.254 08:22:13 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:40.254 08:22:13 -- dd/common.sh@98 -- # xtrace_disable 00:25:40.254 08:22:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.254 08:22:13 -- dd/posix.sh@20 -- # dump1=p10fqrcsm5dyioh70t4ni23x3l33339j 00:25:40.254 08:22:13 -- dd/posix.sh@22 -- # printf %s 5v3cvb5fvl3v0s89xiiif36q2k26c7sr 00:25:40.254 08:22:13 -- dd/posix.sh@23 -- # printf %s p10fqrcsm5dyioh70t4ni23x3l33339j 00:25:40.254 08:22:13 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:40.254 [2024-04-17 08:22:13.538500] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:40.254 [2024-04-17 08:22:13.538599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58614 ] 00:25:40.513 [2024-04-17 08:22:13.682700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.513 [2024-04-17 08:22:13.784193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.772  Copying: 32/32 [B] (average 31 kBps) 00:25:40.772 00:25:40.772 ************************************ 00:25:40.772 END TEST dd_flag_append_forced_aio 00:25:40.772 ************************************ 00:25:40.772 08:22:14 -- dd/posix.sh@27 -- # [[ p10fqrcsm5dyioh70t4ni23x3l33339j5v3cvb5fvl3v0s89xiiif36q2k26c7sr == \p\1\0\f\q\r\c\s\m\5\d\y\i\o\h\7\0\t\4\n\i\2\3\x\3\l\3\3\3\3\9\j\5\v\3\c\v\b\5\f\v\l\3\v\0\s\8\9\x\i\i\i\f\3\6\q\2\k\2\6\c\7\s\r ]] 00:25:40.772 00:25:40.772 real 0m0.600s 00:25:40.772 user 0m0.350s 00:25:40.772 sys 0m0.129s 00:25:40.772 08:22:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.772 08:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:41.029 08:22:14 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:25:41.029 08:22:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:41.029 08:22:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:41.029 08:22:14 -- common/autotest_common.sh@10 -- # set +x 00:25:41.029 ************************************ 00:25:41.029 START TEST dd_flag_directory_forced_aio 00:25:41.029 ************************************ 00:25:41.029 08:22:14 -- common/autotest_common.sh@1104 -- # directory 00:25:41.029 08:22:14 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:41.029 08:22:14 -- common/autotest_common.sh@640 -- # local es=0 00:25:41.029 08:22:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:41.029 08:22:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.029 08:22:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.029 08:22:14 -- common/autotest_common.sh@632 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.029 08:22:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.029 08:22:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.029 08:22:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.029 08:22:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.029 08:22:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:41.029 08:22:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:41.029 [2024-04-17 08:22:14.195941] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:41.030 [2024-04-17 08:22:14.196017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58641 ] 00:25:41.030 [2024-04-17 08:22:14.332453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.288 [2024-04-17 08:22:14.434862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.288 [2024-04-17 08:22:14.505840] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:41.288 [2024-04-17 08:22:14.505893] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:41.288 [2024-04-17 08:22:14.505902] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:41.288 [2024-04-17 08:22:14.601155] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:41.546 08:22:14 -- common/autotest_common.sh@643 -- # es=236 00:25:41.546 08:22:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:41.546 08:22:14 -- common/autotest_common.sh@652 -- # es=108 00:25:41.546 08:22:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:41.546 08:22:14 -- common/autotest_common.sh@660 -- # es=1 00:25:41.546 08:22:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:41.546 08:22:14 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:41.546 08:22:14 -- common/autotest_common.sh@640 -- # local es=0 00:25:41.546 08:22:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:41.546 08:22:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.546 08:22:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.547 08:22:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.547 08:22:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.547 08:22:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.547 08:22:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:41.547 08:22:14 -- 
common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.547 08:22:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:41.547 08:22:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:41.547 [2024-04-17 08:22:14.757073] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:41.547 [2024-04-17 08:22:14.757149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58650 ] 00:25:41.806 [2024-04-17 08:22:14.896425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.806 [2024-04-17 08:22:14.997792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.806 [2024-04-17 08:22:15.069872] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:41.806 [2024-04-17 08:22:15.069926] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:41.806 [2024-04-17 08:22:15.069935] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:42.065 [2024-04-17 08:22:15.167231] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:42.065 08:22:15 -- common/autotest_common.sh@643 -- # es=236 00:25:42.065 08:22:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:42.065 08:22:15 -- common/autotest_common.sh@652 -- # es=108 00:25:42.065 08:22:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:42.065 08:22:15 -- common/autotest_common.sh@660 -- # es=1 00:25:42.065 ************************************ 00:25:42.065 END TEST dd_flag_directory_forced_aio 00:25:42.065 ************************************ 00:25:42.065 08:22:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:42.065 00:25:42.065 real 0m1.143s 00:25:42.065 user 0m0.679s 00:25:42.065 sys 0m0.255s 00:25:42.065 08:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.065 08:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.065 08:22:15 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:25:42.065 08:22:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:42.065 08:22:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.065 08:22:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.065 ************************************ 00:25:42.065 START TEST dd_flag_nofollow_forced_aio 00:25:42.065 ************************************ 00:25:42.065 08:22:15 -- common/autotest_common.sh@1104 -- # nofollow 00:25:42.065 08:22:15 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:42.065 08:22:15 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:42.065 08:22:15 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:42.065 08:22:15 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:42.065 08:22:15 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:42.065 08:22:15 -- common/autotest_common.sh@640 -- # local es=0 00:25:42.065 08:22:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:42.065 08:22:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.065 08:22:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:42.065 08:22:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.065 08:22:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:42.065 08:22:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.065 08:22:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:42.065 08:22:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.065 08:22:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:42.066 08:22:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:42.324 [2024-04-17 08:22:15.403584] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:42.324 [2024-04-17 08:22:15.403644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58679 ] 00:25:42.324 [2024-04-17 08:22:15.542937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.324 [2024-04-17 08:22:15.645064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.583 [2024-04-17 08:22:15.718969] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:42.583 [2024-04-17 08:22:15.719020] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:42.583 [2024-04-17 08:22:15.719029] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:42.583 [2024-04-17 08:22:15.812830] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:42.842 08:22:15 -- common/autotest_common.sh@643 -- # es=216 00:25:42.842 08:22:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:42.842 08:22:15 -- common/autotest_common.sh@652 -- # es=88 00:25:42.842 08:22:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:42.842 08:22:15 -- common/autotest_common.sh@660 -- # es=1 00:25:42.842 08:22:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:42.842 08:22:15 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:42.842 08:22:15 -- common/autotest_common.sh@640 -- # local es=0 00:25:42.842 08:22:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:42.842 08:22:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.842 08:22:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:42.842 08:22:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.842 08:22:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:42.842 08:22:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.842 08:22:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:42.842 08:22:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.842 08:22:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:42.842 08:22:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:42.842 [2024-04-17 08:22:15.982244] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:42.842 [2024-04-17 08:22:15.982449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58688 ] 00:25:42.842 [2024-04-17 08:22:16.122223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.102 [2024-04-17 08:22:16.227260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.102 [2024-04-17 08:22:16.299482] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:43.102 [2024-04-17 08:22:16.299637] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:43.102 [2024-04-17 08:22:16.299685] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:43.102 [2024-04-17 08:22:16.395262] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:43.361 08:22:16 -- common/autotest_common.sh@643 -- # es=216 00:25:43.361 08:22:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:43.361 08:22:16 -- common/autotest_common.sh@652 -- # es=88 00:25:43.361 08:22:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:43.361 08:22:16 -- common/autotest_common.sh@660 -- # es=1 00:25:43.361 08:22:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:43.361 08:22:16 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:43.361 08:22:16 -- dd/common.sh@98 -- # xtrace_disable 00:25:43.361 08:22:16 -- common/autotest_common.sh@10 -- # set +x 00:25:43.361 08:22:16 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:43.361 [2024-04-17 08:22:16.578551] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:43.361 [2024-04-17 08:22:16.578723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58700 ] 00:25:43.619 [2024-04-17 08:22:16.719545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.619 [2024-04-17 08:22:16.823661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.879  Copying: 512/512 [B] (average 500 kBps) 00:25:43.879 00:25:43.879 08:22:17 -- dd/posix.sh@49 -- # [[ 1ltkmv6cgbxeqwqle1s9absdudi7ni7b8z98g78ee2k9ngxpv9a2dnsmznh8xi3tt03nww96u93qtuefil9ib64t3uh6s1gdh1gypdt3ajd4ihus6u53onh8ja79j8yzw0ia29pvsc0i49zpbgapje3tosrcxer8xjpxou9weurol8tdww1paw7uem0lqjdyqqn6yd8k3jrnza3ssdwoz0paiop93ixafrj18zcxg2s2pjfdhz1rilkc3vxr8xlwg7aa9fmyl8d56rbtaku2igdb982gtrzrqhzhudelklsaqdwh9xcexyab5985jsa7n5gxx761u3piwxbql1lkagavqzsgidfpmbvd0fhgeiczde1nrfwp6sv4op3m1a3h2n2o22kmx2xs7krfor7n0k1ky939srpqc31r6mbdnv1pfzueixe9uk6uw3rzowwosze6y1symi04srn8zc7ypn60bppu4fnlcyye30cqmtrcpmuaz1spb42ah3i4u107 == \1\l\t\k\m\v\6\c\g\b\x\e\q\w\q\l\e\1\s\9\a\b\s\d\u\d\i\7\n\i\7\b\8\z\9\8\g\7\8\e\e\2\k\9\n\g\x\p\v\9\a\2\d\n\s\m\z\n\h\8\x\i\3\t\t\0\3\n\w\w\9\6\u\9\3\q\t\u\e\f\i\l\9\i\b\6\4\t\3\u\h\6\s\1\g\d\h\1\g\y\p\d\t\3\a\j\d\4\i\h\u\s\6\u\5\3\o\n\h\8\j\a\7\9\j\8\y\z\w\0\i\a\2\9\p\v\s\c\0\i\4\9\z\p\b\g\a\p\j\e\3\t\o\s\r\c\x\e\r\8\x\j\p\x\o\u\9\w\e\u\r\o\l\8\t\d\w\w\1\p\a\w\7\u\e\m\0\l\q\j\d\y\q\q\n\6\y\d\8\k\3\j\r\n\z\a\3\s\s\d\w\o\z\0\p\a\i\o\p\9\3\i\x\a\f\r\j\1\8\z\c\x\g\2\s\2\p\j\f\d\h\z\1\r\i\l\k\c\3\v\x\r\8\x\l\w\g\7\a\a\9\f\m\y\l\8\d\5\6\r\b\t\a\k\u\2\i\g\d\b\9\8\2\g\t\r\z\r\q\h\z\h\u\d\e\l\k\l\s\a\q\d\w\h\9\x\c\e\x\y\a\b\5\9\8\5\j\s\a\7\n\5\g\x\x\7\6\1\u\3\p\i\w\x\b\q\l\1\l\k\a\g\a\v\q\z\s\g\i\d\f\p\m\b\v\d\0\f\h\g\e\i\c\z\d\e\1\n\r\f\w\p\6\s\v\4\o\p\3\m\1\a\3\h\2\n\2\o\2\2\k\m\x\2\x\s\7\k\r\f\o\r\7\n\0\k\1\k\y\9\3\9\s\r\p\q\c\3\1\r\6\m\b\d\n\v\1\p\f\z\u\e\i\x\e\9\u\k\6\u\w\3\r\z\o\w\w\o\s\z\e\6\y\1\s\y\m\i\0\4\s\r\n\8\z\c\7\y\p\n\6\0\b\p\p\u\4\f\n\l\c\y\y\e\3\0\c\q\m\t\r\c\p\m\u\a\z\1\s\p\b\4\2\a\h\3\i\4\u\1\0\7 ]] 00:25:43.879 00:25:43.879 real 0m1.784s 00:25:43.879 user 0m1.072s 00:25:43.879 sys 0m0.379s 00:25:43.879 08:22:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.879 08:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:43.879 ************************************ 00:25:43.879 END TEST dd_flag_nofollow_forced_aio 00:25:43.879 ************************************ 00:25:43.879 08:22:17 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:25:43.879 08:22:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:43.879 08:22:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.879 08:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:43.879 ************************************ 00:25:43.879 START TEST dd_flag_noatime_forced_aio 00:25:43.879 ************************************ 00:25:43.879 08:22:17 -- common/autotest_common.sh@1104 -- # noatime 00:25:43.879 08:22:17 -- dd/posix.sh@53 -- # local atime_if 00:25:43.879 08:22:17 -- dd/posix.sh@54 -- # local atime_of 00:25:43.879 08:22:17 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:43.879 08:22:17 -- dd/common.sh@98 -- # xtrace_disable 00:25:43.879 08:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:43.879 08:22:17 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:44.168 08:22:17 -- dd/posix.sh@60 -- # atime_if=1713342136 
00:25:44.168 08:22:17 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:44.168 08:22:17 -- dd/posix.sh@61 -- # atime_of=1713342137 00:25:44.168 08:22:17 -- dd/posix.sh@66 -- # sleep 1 00:25:45.112 08:22:18 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:45.112 [2024-04-17 08:22:18.272101] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:45.112 [2024-04-17 08:22:18.272258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58742 ] 00:25:45.112 [2024-04-17 08:22:18.409928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.369 [2024-04-17 08:22:18.512981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.628  Copying: 512/512 [B] (average 500 kBps) 00:25:45.628 00:25:45.628 08:22:18 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:45.628 08:22:18 -- dd/posix.sh@69 -- # (( atime_if == 1713342136 )) 00:25:45.628 08:22:18 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:45.628 08:22:18 -- dd/posix.sh@70 -- # (( atime_of == 1713342137 )) 00:25:45.628 08:22:18 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:45.628 [2024-04-17 08:22:18.885616] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:45.628 [2024-04-17 08:22:18.885690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58753 ] 00:25:45.887 [2024-04-17 08:22:19.021878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.887 [2024-04-17 08:22:19.121747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.146  Copying: 512/512 [B] (average 500 kBps) 00:25:46.146 00:25:46.146 08:22:19 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:46.146 08:22:19 -- dd/posix.sh@73 -- # (( atime_if < 1713342139 )) 00:25:46.146 00:25:46.146 real 0m2.239s 00:25:46.146 user 0m0.723s 00:25:46.146 sys 0m0.268s 00:25:46.146 ************************************ 00:25:46.146 END TEST dd_flag_noatime_forced_aio 00:25:46.146 ************************************ 00:25:46.146 08:22:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.146 08:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:46.403 08:22:19 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:25:46.403 08:22:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:46.403 08:22:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:46.403 08:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:46.403 ************************************ 00:25:46.403 START TEST dd_flags_misc_forced_aio 00:25:46.403 ************************************ 00:25:46.403 08:22:19 -- common/autotest_common.sh@1104 -- # io 00:25:46.403 08:22:19 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:25:46.403 08:22:19 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:25:46.403 08:22:19 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:25:46.403 08:22:19 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:46.403 08:22:19 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:46.403 08:22:19 -- dd/common.sh@98 -- # xtrace_disable 00:25:46.403 08:22:19 -- common/autotest_common.sh@10 -- # set +x 00:25:46.403 08:22:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:46.403 08:22:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:46.403 [2024-04-17 08:22:19.559329] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:46.403 [2024-04-17 08:22:19.559413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58780 ] 00:25:46.403 [2024-04-17 08:22:19.697993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.662 [2024-04-17 08:22:19.801899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.921  Copying: 512/512 [B] (average 500 kBps) 00:25:46.921 00:25:46.921 08:22:20 -- dd/posix.sh@93 -- # [[ c63b4omaxpnb5wb5psj9a3ezb4324tn9sn85p47fmoxssq0zkfcp49b1uq5zpt29ink6w2becgifmio77p1bp8lhtinahkpk7csza8m9is4kavk6gqbxg70mby6k49pzr48326zzqjw7y96r6tmifs7j7i6bergk2dv4tvbxers3pbmlwiqtrnbfyw838be28bau6nrzoxhbl88q7tlp3yyonesr73bit1jtmxj29klwha0vedf0a0hvqxwyc38uqrp0508wm3zfeqdurck6dndsghmqn5038lztdw3z1epwhk6pnxbjewt1se5qwtw2me8jfh3xhs8qzlh6e80vi9i05nu9i2yhisk8d4k4a0cyxo2ijzxwwcgxu6g7nhzwlq0bxdz90a6sn199jbanjk19m1lsxs5q95n55j0gsxiqsev91yzwkwzd96ew3apdceyz1jrwovo06uwpjwz3qsx4quvu8tpx5fj9dxzixjaf8ic987r3wuntbc0lwkhb == \c\6\3\b\4\o\m\a\x\p\n\b\5\w\b\5\p\s\j\9\a\3\e\z\b\4\3\2\4\t\n\9\s\n\8\5\p\4\7\f\m\o\x\s\s\q\0\z\k\f\c\p\4\9\b\1\u\q\5\z\p\t\2\9\i\n\k\6\w\2\b\e\c\g\i\f\m\i\o\7\7\p\1\b\p\8\l\h\t\i\n\a\h\k\p\k\7\c\s\z\a\8\m\9\i\s\4\k\a\v\k\6\g\q\b\x\g\7\0\m\b\y\6\k\4\9\p\z\r\4\8\3\2\6\z\z\q\j\w\7\y\9\6\r\6\t\m\i\f\s\7\j\7\i\6\b\e\r\g\k\2\d\v\4\t\v\b\x\e\r\s\3\p\b\m\l\w\i\q\t\r\n\b\f\y\w\8\3\8\b\e\2\8\b\a\u\6\n\r\z\o\x\h\b\l\8\8\q\7\t\l\p\3\y\y\o\n\e\s\r\7\3\b\i\t\1\j\t\m\x\j\2\9\k\l\w\h\a\0\v\e\d\f\0\a\0\h\v\q\x\w\y\c\3\8\u\q\r\p\0\5\0\8\w\m\3\z\f\e\q\d\u\r\c\k\6\d\n\d\s\g\h\m\q\n\5\0\3\8\l\z\t\d\w\3\z\1\e\p\w\h\k\6\p\n\x\b\j\e\w\t\1\s\e\5\q\w\t\w\2\m\e\8\j\f\h\3\x\h\s\8\q\z\l\h\6\e\8\0\v\i\9\i\0\5\n\u\9\i\2\y\h\i\s\k\8\d\4\k\4\a\0\c\y\x\o\2\i\j\z\x\w\w\c\g\x\u\6\g\7\n\h\z\w\l\q\0\b\x\d\z\9\0\a\6\s\n\1\9\9\j\b\a\n\j\k\1\9\m\1\l\s\x\s\5\q\9\5\n\5\5\j\0\g\s\x\i\q\s\e\v\9\1\y\z\w\k\w\z\d\9\6\e\w\3\a\p\d\c\e\y\z\1\j\r\w\o\v\o\0\6\u\w\p\j\w\z\3\q\s\x\4\q\u\v\u\8\t\p\x\5\f\j\9\d\x\z\i\x\j\a\f\8\i\c\9\8\7\r\3\w\u\n\t\b\c\0\l\w\k\h\b ]] 00:25:46.921 08:22:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:46.921 08:22:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:46.921 [2024-04-17 08:22:20.145835] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:46.921 [2024-04-17 08:22:20.145914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58787 ] 00:25:47.180 [2024-04-17 08:22:20.284353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.180 [2024-04-17 08:22:20.374295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.439  Copying: 512/512 [B] (average 500 kBps) 00:25:47.439 00:25:47.440 08:22:20 -- dd/posix.sh@93 -- # [[ c63b4omaxpnb5wb5psj9a3ezb4324tn9sn85p47fmoxssq0zkfcp49b1uq5zpt29ink6w2becgifmio77p1bp8lhtinahkpk7csza8m9is4kavk6gqbxg70mby6k49pzr48326zzqjw7y96r6tmifs7j7i6bergk2dv4tvbxers3pbmlwiqtrnbfyw838be28bau6nrzoxhbl88q7tlp3yyonesr73bit1jtmxj29klwha0vedf0a0hvqxwyc38uqrp0508wm3zfeqdurck6dndsghmqn5038lztdw3z1epwhk6pnxbjewt1se5qwtw2me8jfh3xhs8qzlh6e80vi9i05nu9i2yhisk8d4k4a0cyxo2ijzxwwcgxu6g7nhzwlq0bxdz90a6sn199jbanjk19m1lsxs5q95n55j0gsxiqsev91yzwkwzd96ew3apdceyz1jrwovo06uwpjwz3qsx4quvu8tpx5fj9dxzixjaf8ic987r3wuntbc0lwkhb == \c\6\3\b\4\o\m\a\x\p\n\b\5\w\b\5\p\s\j\9\a\3\e\z\b\4\3\2\4\t\n\9\s\n\8\5\p\4\7\f\m\o\x\s\s\q\0\z\k\f\c\p\4\9\b\1\u\q\5\z\p\t\2\9\i\n\k\6\w\2\b\e\c\g\i\f\m\i\o\7\7\p\1\b\p\8\l\h\t\i\n\a\h\k\p\k\7\c\s\z\a\8\m\9\i\s\4\k\a\v\k\6\g\q\b\x\g\7\0\m\b\y\6\k\4\9\p\z\r\4\8\3\2\6\z\z\q\j\w\7\y\9\6\r\6\t\m\i\f\s\7\j\7\i\6\b\e\r\g\k\2\d\v\4\t\v\b\x\e\r\s\3\p\b\m\l\w\i\q\t\r\n\b\f\y\w\8\3\8\b\e\2\8\b\a\u\6\n\r\z\o\x\h\b\l\8\8\q\7\t\l\p\3\y\y\o\n\e\s\r\7\3\b\i\t\1\j\t\m\x\j\2\9\k\l\w\h\a\0\v\e\d\f\0\a\0\h\v\q\x\w\y\c\3\8\u\q\r\p\0\5\0\8\w\m\3\z\f\e\q\d\u\r\c\k\6\d\n\d\s\g\h\m\q\n\5\0\3\8\l\z\t\d\w\3\z\1\e\p\w\h\k\6\p\n\x\b\j\e\w\t\1\s\e\5\q\w\t\w\2\m\e\8\j\f\h\3\x\h\s\8\q\z\l\h\6\e\8\0\v\i\9\i\0\5\n\u\9\i\2\y\h\i\s\k\8\d\4\k\4\a\0\c\y\x\o\2\i\j\z\x\w\w\c\g\x\u\6\g\7\n\h\z\w\l\q\0\b\x\d\z\9\0\a\6\s\n\1\9\9\j\b\a\n\j\k\1\9\m\1\l\s\x\s\5\q\9\5\n\5\5\j\0\g\s\x\i\q\s\e\v\9\1\y\z\w\k\w\z\d\9\6\e\w\3\a\p\d\c\e\y\z\1\j\r\w\o\v\o\0\6\u\w\p\j\w\z\3\q\s\x\4\q\u\v\u\8\t\p\x\5\f\j\9\d\x\z\i\x\j\a\f\8\i\c\9\8\7\r\3\w\u\n\t\b\c\0\l\w\k\h\b ]] 00:25:47.440 08:22:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:47.440 08:22:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:47.440 [2024-04-17 08:22:20.710741] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:47.440 [2024-04-17 08:22:20.710830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58795 ] 00:25:47.699 [2024-04-17 08:22:20.854403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.699 [2024-04-17 08:22:20.955899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.958  Copying: 512/512 [B] (average 250 kBps) 00:25:47.958 00:25:47.958 08:22:21 -- dd/posix.sh@93 -- # [[ c63b4omaxpnb5wb5psj9a3ezb4324tn9sn85p47fmoxssq0zkfcp49b1uq5zpt29ink6w2becgifmio77p1bp8lhtinahkpk7csza8m9is4kavk6gqbxg70mby6k49pzr48326zzqjw7y96r6tmifs7j7i6bergk2dv4tvbxers3pbmlwiqtrnbfyw838be28bau6nrzoxhbl88q7tlp3yyonesr73bit1jtmxj29klwha0vedf0a0hvqxwyc38uqrp0508wm3zfeqdurck6dndsghmqn5038lztdw3z1epwhk6pnxbjewt1se5qwtw2me8jfh3xhs8qzlh6e80vi9i05nu9i2yhisk8d4k4a0cyxo2ijzxwwcgxu6g7nhzwlq0bxdz90a6sn199jbanjk19m1lsxs5q95n55j0gsxiqsev91yzwkwzd96ew3apdceyz1jrwovo06uwpjwz3qsx4quvu8tpx5fj9dxzixjaf8ic987r3wuntbc0lwkhb == \c\6\3\b\4\o\m\a\x\p\n\b\5\w\b\5\p\s\j\9\a\3\e\z\b\4\3\2\4\t\n\9\s\n\8\5\p\4\7\f\m\o\x\s\s\q\0\z\k\f\c\p\4\9\b\1\u\q\5\z\p\t\2\9\i\n\k\6\w\2\b\e\c\g\i\f\m\i\o\7\7\p\1\b\p\8\l\h\t\i\n\a\h\k\p\k\7\c\s\z\a\8\m\9\i\s\4\k\a\v\k\6\g\q\b\x\g\7\0\m\b\y\6\k\4\9\p\z\r\4\8\3\2\6\z\z\q\j\w\7\y\9\6\r\6\t\m\i\f\s\7\j\7\i\6\b\e\r\g\k\2\d\v\4\t\v\b\x\e\r\s\3\p\b\m\l\w\i\q\t\r\n\b\f\y\w\8\3\8\b\e\2\8\b\a\u\6\n\r\z\o\x\h\b\l\8\8\q\7\t\l\p\3\y\y\o\n\e\s\r\7\3\b\i\t\1\j\t\m\x\j\2\9\k\l\w\h\a\0\v\e\d\f\0\a\0\h\v\q\x\w\y\c\3\8\u\q\r\p\0\5\0\8\w\m\3\z\f\e\q\d\u\r\c\k\6\d\n\d\s\g\h\m\q\n\5\0\3\8\l\z\t\d\w\3\z\1\e\p\w\h\k\6\p\n\x\b\j\e\w\t\1\s\e\5\q\w\t\w\2\m\e\8\j\f\h\3\x\h\s\8\q\z\l\h\6\e\8\0\v\i\9\i\0\5\n\u\9\i\2\y\h\i\s\k\8\d\4\k\4\a\0\c\y\x\o\2\i\j\z\x\w\w\c\g\x\u\6\g\7\n\h\z\w\l\q\0\b\x\d\z\9\0\a\6\s\n\1\9\9\j\b\a\n\j\k\1\9\m\1\l\s\x\s\5\q\9\5\n\5\5\j\0\g\s\x\i\q\s\e\v\9\1\y\z\w\k\w\z\d\9\6\e\w\3\a\p\d\c\e\y\z\1\j\r\w\o\v\o\0\6\u\w\p\j\w\z\3\q\s\x\4\q\u\v\u\8\t\p\x\5\f\j\9\d\x\z\i\x\j\a\f\8\i\c\9\8\7\r\3\w\u\n\t\b\c\0\l\w\k\h\b ]] 00:25:47.958 08:22:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:47.958 08:22:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:48.216 [2024-04-17 08:22:21.288869] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:48.216 [2024-04-17 08:22:21.288973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58803 ] 00:25:48.216 [2024-04-17 08:22:21.431098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.216 [2024-04-17 08:22:21.534842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.731  Copying: 512/512 [B] (average 166 kBps) 00:25:48.731 00:25:48.732 08:22:21 -- dd/posix.sh@93 -- # [[ c63b4omaxpnb5wb5psj9a3ezb4324tn9sn85p47fmoxssq0zkfcp49b1uq5zpt29ink6w2becgifmio77p1bp8lhtinahkpk7csza8m9is4kavk6gqbxg70mby6k49pzr48326zzqjw7y96r6tmifs7j7i6bergk2dv4tvbxers3pbmlwiqtrnbfyw838be28bau6nrzoxhbl88q7tlp3yyonesr73bit1jtmxj29klwha0vedf0a0hvqxwyc38uqrp0508wm3zfeqdurck6dndsghmqn5038lztdw3z1epwhk6pnxbjewt1se5qwtw2me8jfh3xhs8qzlh6e80vi9i05nu9i2yhisk8d4k4a0cyxo2ijzxwwcgxu6g7nhzwlq0bxdz90a6sn199jbanjk19m1lsxs5q95n55j0gsxiqsev91yzwkwzd96ew3apdceyz1jrwovo06uwpjwz3qsx4quvu8tpx5fj9dxzixjaf8ic987r3wuntbc0lwkhb == \c\6\3\b\4\o\m\a\x\p\n\b\5\w\b\5\p\s\j\9\a\3\e\z\b\4\3\2\4\t\n\9\s\n\8\5\p\4\7\f\m\o\x\s\s\q\0\z\k\f\c\p\4\9\b\1\u\q\5\z\p\t\2\9\i\n\k\6\w\2\b\e\c\g\i\f\m\i\o\7\7\p\1\b\p\8\l\h\t\i\n\a\h\k\p\k\7\c\s\z\a\8\m\9\i\s\4\k\a\v\k\6\g\q\b\x\g\7\0\m\b\y\6\k\4\9\p\z\r\4\8\3\2\6\z\z\q\j\w\7\y\9\6\r\6\t\m\i\f\s\7\j\7\i\6\b\e\r\g\k\2\d\v\4\t\v\b\x\e\r\s\3\p\b\m\l\w\i\q\t\r\n\b\f\y\w\8\3\8\b\e\2\8\b\a\u\6\n\r\z\o\x\h\b\l\8\8\q\7\t\l\p\3\y\y\o\n\e\s\r\7\3\b\i\t\1\j\t\m\x\j\2\9\k\l\w\h\a\0\v\e\d\f\0\a\0\h\v\q\x\w\y\c\3\8\u\q\r\p\0\5\0\8\w\m\3\z\f\e\q\d\u\r\c\k\6\d\n\d\s\g\h\m\q\n\5\0\3\8\l\z\t\d\w\3\z\1\e\p\w\h\k\6\p\n\x\b\j\e\w\t\1\s\e\5\q\w\t\w\2\m\e\8\j\f\h\3\x\h\s\8\q\z\l\h\6\e\8\0\v\i\9\i\0\5\n\u\9\i\2\y\h\i\s\k\8\d\4\k\4\a\0\c\y\x\o\2\i\j\z\x\w\w\c\g\x\u\6\g\7\n\h\z\w\l\q\0\b\x\d\z\9\0\a\6\s\n\1\9\9\j\b\a\n\j\k\1\9\m\1\l\s\x\s\5\q\9\5\n\5\5\j\0\g\s\x\i\q\s\e\v\9\1\y\z\w\k\w\z\d\9\6\e\w\3\a\p\d\c\e\y\z\1\j\r\w\o\v\o\0\6\u\w\p\j\w\z\3\q\s\x\4\q\u\v\u\8\t\p\x\5\f\j\9\d\x\z\i\x\j\a\f\8\i\c\9\8\7\r\3\w\u\n\t\b\c\0\l\w\k\h\b ]] 00:25:48.732 08:22:21 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:48.732 08:22:21 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:48.732 08:22:21 -- dd/common.sh@98 -- # xtrace_disable 00:25:48.732 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:48.732 08:22:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:48.732 08:22:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:48.732 [2024-04-17 08:22:21.889883] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:48.732 [2024-04-17 08:22:21.890037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58814 ] 00:25:48.732 [2024-04-17 08:22:22.030574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.989 [2024-04-17 08:22:22.130833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.246  Copying: 512/512 [B] (average 500 kBps) 00:25:49.246 00:25:49.246 08:22:22 -- dd/posix.sh@93 -- # [[ cu62s3z6x1mc89e8ufbrhrl48qx4x27yne14mdtnn8gmx8ihfs48xynd7i5aio7gfmia22z6cu92tghk16lyn3mah6r8mcd6urnleoloydoxszc9xg31t2kptk1lmlfva84zjp0k75ex33g17phsu0otja8nhb9bd0jp7nr8oh3yxru1e7wzzs8fhvuqtdym4zy34bdjjceaz7233g4sqj15mxmkf36ofs9o8fpn80bwqrw9r3ms8a1ikrkvunvqxsupzvm1c1w0v7onkeyx5o5hf8le7d0jrc0piyhlherb5y6u84f0z9mqnb2zcjrohyr8qrxl0uqjycgloqnvzfzpvnym3wofd8fty6xol2cc66k7e3n49nuw3vjr3lz7f9is4a7ufgw8xshznqpzepjuq8r38lz8kx708ozgp5ju1wnan9je9vmxg6bgxbndwzasuyr06wx8np6n0g41p4sozhe9nryrit2qmz79idcj0tckzkttkvj1q2t3p4fv == \c\u\6\2\s\3\z\6\x\1\m\c\8\9\e\8\u\f\b\r\h\r\l\4\8\q\x\4\x\2\7\y\n\e\1\4\m\d\t\n\n\8\g\m\x\8\i\h\f\s\4\8\x\y\n\d\7\i\5\a\i\o\7\g\f\m\i\a\2\2\z\6\c\u\9\2\t\g\h\k\1\6\l\y\n\3\m\a\h\6\r\8\m\c\d\6\u\r\n\l\e\o\l\o\y\d\o\x\s\z\c\9\x\g\3\1\t\2\k\p\t\k\1\l\m\l\f\v\a\8\4\z\j\p\0\k\7\5\e\x\3\3\g\1\7\p\h\s\u\0\o\t\j\a\8\n\h\b\9\b\d\0\j\p\7\n\r\8\o\h\3\y\x\r\u\1\e\7\w\z\z\s\8\f\h\v\u\q\t\d\y\m\4\z\y\3\4\b\d\j\j\c\e\a\z\7\2\3\3\g\4\s\q\j\1\5\m\x\m\k\f\3\6\o\f\s\9\o\8\f\p\n\8\0\b\w\q\r\w\9\r\3\m\s\8\a\1\i\k\r\k\v\u\n\v\q\x\s\u\p\z\v\m\1\c\1\w\0\v\7\o\n\k\e\y\x\5\o\5\h\f\8\l\e\7\d\0\j\r\c\0\p\i\y\h\l\h\e\r\b\5\y\6\u\8\4\f\0\z\9\m\q\n\b\2\z\c\j\r\o\h\y\r\8\q\r\x\l\0\u\q\j\y\c\g\l\o\q\n\v\z\f\z\p\v\n\y\m\3\w\o\f\d\8\f\t\y\6\x\o\l\2\c\c\6\6\k\7\e\3\n\4\9\n\u\w\3\v\j\r\3\l\z\7\f\9\i\s\4\a\7\u\f\g\w\8\x\s\h\z\n\q\p\z\e\p\j\u\q\8\r\3\8\l\z\8\k\x\7\0\8\o\z\g\p\5\j\u\1\w\n\a\n\9\j\e\9\v\m\x\g\6\b\g\x\b\n\d\w\z\a\s\u\y\r\0\6\w\x\8\n\p\6\n\0\g\4\1\p\4\s\o\z\h\e\9\n\r\y\r\i\t\2\q\m\z\7\9\i\d\c\j\0\t\c\k\z\k\t\t\k\v\j\1\q\2\t\3\p\4\f\v ]] 00:25:49.246 08:22:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:49.246 08:22:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:49.246 [2024-04-17 08:22:22.475540] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:49.246 [2024-04-17 08:22:22.475610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58823 ] 00:25:49.504 [2024-04-17 08:22:22.597817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.504 [2024-04-17 08:22:22.698424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.762  Copying: 512/512 [B] (average 500 kBps) 00:25:49.762 00:25:49.762 08:22:22 -- dd/posix.sh@93 -- # [[ cu62s3z6x1mc89e8ufbrhrl48qx4x27yne14mdtnn8gmx8ihfs48xynd7i5aio7gfmia22z6cu92tghk16lyn3mah6r8mcd6urnleoloydoxszc9xg31t2kptk1lmlfva84zjp0k75ex33g17phsu0otja8nhb9bd0jp7nr8oh3yxru1e7wzzs8fhvuqtdym4zy34bdjjceaz7233g4sqj15mxmkf36ofs9o8fpn80bwqrw9r3ms8a1ikrkvunvqxsupzvm1c1w0v7onkeyx5o5hf8le7d0jrc0piyhlherb5y6u84f0z9mqnb2zcjrohyr8qrxl0uqjycgloqnvzfzpvnym3wofd8fty6xol2cc66k7e3n49nuw3vjr3lz7f9is4a7ufgw8xshznqpzepjuq8r38lz8kx708ozgp5ju1wnan9je9vmxg6bgxbndwzasuyr06wx8np6n0g41p4sozhe9nryrit2qmz79idcj0tckzkttkvj1q2t3p4fv == \c\u\6\2\s\3\z\6\x\1\m\c\8\9\e\8\u\f\b\r\h\r\l\4\8\q\x\4\x\2\7\y\n\e\1\4\m\d\t\n\n\8\g\m\x\8\i\h\f\s\4\8\x\y\n\d\7\i\5\a\i\o\7\g\f\m\i\a\2\2\z\6\c\u\9\2\t\g\h\k\1\6\l\y\n\3\m\a\h\6\r\8\m\c\d\6\u\r\n\l\e\o\l\o\y\d\o\x\s\z\c\9\x\g\3\1\t\2\k\p\t\k\1\l\m\l\f\v\a\8\4\z\j\p\0\k\7\5\e\x\3\3\g\1\7\p\h\s\u\0\o\t\j\a\8\n\h\b\9\b\d\0\j\p\7\n\r\8\o\h\3\y\x\r\u\1\e\7\w\z\z\s\8\f\h\v\u\q\t\d\y\m\4\z\y\3\4\b\d\j\j\c\e\a\z\7\2\3\3\g\4\s\q\j\1\5\m\x\m\k\f\3\6\o\f\s\9\o\8\f\p\n\8\0\b\w\q\r\w\9\r\3\m\s\8\a\1\i\k\r\k\v\u\n\v\q\x\s\u\p\z\v\m\1\c\1\w\0\v\7\o\n\k\e\y\x\5\o\5\h\f\8\l\e\7\d\0\j\r\c\0\p\i\y\h\l\h\e\r\b\5\y\6\u\8\4\f\0\z\9\m\q\n\b\2\z\c\j\r\o\h\y\r\8\q\r\x\l\0\u\q\j\y\c\g\l\o\q\n\v\z\f\z\p\v\n\y\m\3\w\o\f\d\8\f\t\y\6\x\o\l\2\c\c\6\6\k\7\e\3\n\4\9\n\u\w\3\v\j\r\3\l\z\7\f\9\i\s\4\a\7\u\f\g\w\8\x\s\h\z\n\q\p\z\e\p\j\u\q\8\r\3\8\l\z\8\k\x\7\0\8\o\z\g\p\5\j\u\1\w\n\a\n\9\j\e\9\v\m\x\g\6\b\g\x\b\n\d\w\z\a\s\u\y\r\0\6\w\x\8\n\p\6\n\0\g\4\1\p\4\s\o\z\h\e\9\n\r\y\r\i\t\2\q\m\z\7\9\i\d\c\j\0\t\c\k\z\k\t\t\k\v\j\1\q\2\t\3\p\4\f\v ]] 00:25:49.762 08:22:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:49.762 08:22:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:49.762 [2024-04-17 08:22:23.041796] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
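For context on the runs above: dd/posix.sh is cycling the same 512-byte dump file through spdk_dd with different --iflag/--oflag combinations (direct, nonblock, sync, dsync) and re-checking the generated magic string after each pass. A rough standalone loop in the same spirit — the paths and the exact flag lists below are illustrative, not copied from posix.sh, and the magic-string check is replaced by a plain cmp — would be:

    # sketch only: replay the read/write flag matrix by hand
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    for iflag in direct nonblock; do
      for oflag in direct nonblock sync dsync; do
        "$DD" --aio --if="$SRC" --iflag="$iflag" --of="$DST" --oflag="$oflag"
        cmp -s "$SRC" "$DST" || echo "mismatch for iflag=$iflag oflag=$oflag"
      done
    done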
00:25:49.762 [2024-04-17 08:22:23.041870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58830 ] 00:25:50.019 [2024-04-17 08:22:23.180234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.019 [2024-04-17 08:22:23.280917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.277  Copying: 512/512 [B] (average 100 kBps) 00:25:50.277 00:25:50.277 08:22:23 -- dd/posix.sh@93 -- # [[ cu62s3z6x1mc89e8ufbrhrl48qx4x27yne14mdtnn8gmx8ihfs48xynd7i5aio7gfmia22z6cu92tghk16lyn3mah6r8mcd6urnleoloydoxszc9xg31t2kptk1lmlfva84zjp0k75ex33g17phsu0otja8nhb9bd0jp7nr8oh3yxru1e7wzzs8fhvuqtdym4zy34bdjjceaz7233g4sqj15mxmkf36ofs9o8fpn80bwqrw9r3ms8a1ikrkvunvqxsupzvm1c1w0v7onkeyx5o5hf8le7d0jrc0piyhlherb5y6u84f0z9mqnb2zcjrohyr8qrxl0uqjycgloqnvzfzpvnym3wofd8fty6xol2cc66k7e3n49nuw3vjr3lz7f9is4a7ufgw8xshznqpzepjuq8r38lz8kx708ozgp5ju1wnan9je9vmxg6bgxbndwzasuyr06wx8np6n0g41p4sozhe9nryrit2qmz79idcj0tckzkttkvj1q2t3p4fv == \c\u\6\2\s\3\z\6\x\1\m\c\8\9\e\8\u\f\b\r\h\r\l\4\8\q\x\4\x\2\7\y\n\e\1\4\m\d\t\n\n\8\g\m\x\8\i\h\f\s\4\8\x\y\n\d\7\i\5\a\i\o\7\g\f\m\i\a\2\2\z\6\c\u\9\2\t\g\h\k\1\6\l\y\n\3\m\a\h\6\r\8\m\c\d\6\u\r\n\l\e\o\l\o\y\d\o\x\s\z\c\9\x\g\3\1\t\2\k\p\t\k\1\l\m\l\f\v\a\8\4\z\j\p\0\k\7\5\e\x\3\3\g\1\7\p\h\s\u\0\o\t\j\a\8\n\h\b\9\b\d\0\j\p\7\n\r\8\o\h\3\y\x\r\u\1\e\7\w\z\z\s\8\f\h\v\u\q\t\d\y\m\4\z\y\3\4\b\d\j\j\c\e\a\z\7\2\3\3\g\4\s\q\j\1\5\m\x\m\k\f\3\6\o\f\s\9\o\8\f\p\n\8\0\b\w\q\r\w\9\r\3\m\s\8\a\1\i\k\r\k\v\u\n\v\q\x\s\u\p\z\v\m\1\c\1\w\0\v\7\o\n\k\e\y\x\5\o\5\h\f\8\l\e\7\d\0\j\r\c\0\p\i\y\h\l\h\e\r\b\5\y\6\u\8\4\f\0\z\9\m\q\n\b\2\z\c\j\r\o\h\y\r\8\q\r\x\l\0\u\q\j\y\c\g\l\o\q\n\v\z\f\z\p\v\n\y\m\3\w\o\f\d\8\f\t\y\6\x\o\l\2\c\c\6\6\k\7\e\3\n\4\9\n\u\w\3\v\j\r\3\l\z\7\f\9\i\s\4\a\7\u\f\g\w\8\x\s\h\z\n\q\p\z\e\p\j\u\q\8\r\3\8\l\z\8\k\x\7\0\8\o\z\g\p\5\j\u\1\w\n\a\n\9\j\e\9\v\m\x\g\6\b\g\x\b\n\d\w\z\a\s\u\y\r\0\6\w\x\8\n\p\6\n\0\g\4\1\p\4\s\o\z\h\e\9\n\r\y\r\i\t\2\q\m\z\7\9\i\d\c\j\0\t\c\k\z\k\t\t\k\v\j\1\q\2\t\3\p\4\f\v ]] 00:25:50.277 08:22:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:50.277 08:22:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:50.534 [2024-04-17 08:22:23.629696] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:50.534 [2024-04-17 08:22:23.629774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58838 ] 00:25:50.534 [2024-04-17 08:22:23.770238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.534 [2024-04-17 08:22:23.862242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.051  Copying: 512/512 [B] (average 166 kBps) 00:25:51.051 00:25:51.051 08:22:24 -- dd/posix.sh@93 -- # [[ cu62s3z6x1mc89e8ufbrhrl48qx4x27yne14mdtnn8gmx8ihfs48xynd7i5aio7gfmia22z6cu92tghk16lyn3mah6r8mcd6urnleoloydoxszc9xg31t2kptk1lmlfva84zjp0k75ex33g17phsu0otja8nhb9bd0jp7nr8oh3yxru1e7wzzs8fhvuqtdym4zy34bdjjceaz7233g4sqj15mxmkf36ofs9o8fpn80bwqrw9r3ms8a1ikrkvunvqxsupzvm1c1w0v7onkeyx5o5hf8le7d0jrc0piyhlherb5y6u84f0z9mqnb2zcjrohyr8qrxl0uqjycgloqnvzfzpvnym3wofd8fty6xol2cc66k7e3n49nuw3vjr3lz7f9is4a7ufgw8xshznqpzepjuq8r38lz8kx708ozgp5ju1wnan9je9vmxg6bgxbndwzasuyr06wx8np6n0g41p4sozhe9nryrit2qmz79idcj0tckzkttkvj1q2t3p4fv == \c\u\6\2\s\3\z\6\x\1\m\c\8\9\e\8\u\f\b\r\h\r\l\4\8\q\x\4\x\2\7\y\n\e\1\4\m\d\t\n\n\8\g\m\x\8\i\h\f\s\4\8\x\y\n\d\7\i\5\a\i\o\7\g\f\m\i\a\2\2\z\6\c\u\9\2\t\g\h\k\1\6\l\y\n\3\m\a\h\6\r\8\m\c\d\6\u\r\n\l\e\o\l\o\y\d\o\x\s\z\c\9\x\g\3\1\t\2\k\p\t\k\1\l\m\l\f\v\a\8\4\z\j\p\0\k\7\5\e\x\3\3\g\1\7\p\h\s\u\0\o\t\j\a\8\n\h\b\9\b\d\0\j\p\7\n\r\8\o\h\3\y\x\r\u\1\e\7\w\z\z\s\8\f\h\v\u\q\t\d\y\m\4\z\y\3\4\b\d\j\j\c\e\a\z\7\2\3\3\g\4\s\q\j\1\5\m\x\m\k\f\3\6\o\f\s\9\o\8\f\p\n\8\0\b\w\q\r\w\9\r\3\m\s\8\a\1\i\k\r\k\v\u\n\v\q\x\s\u\p\z\v\m\1\c\1\w\0\v\7\o\n\k\e\y\x\5\o\5\h\f\8\l\e\7\d\0\j\r\c\0\p\i\y\h\l\h\e\r\b\5\y\6\u\8\4\f\0\z\9\m\q\n\b\2\z\c\j\r\o\h\y\r\8\q\r\x\l\0\u\q\j\y\c\g\l\o\q\n\v\z\f\z\p\v\n\y\m\3\w\o\f\d\8\f\t\y\6\x\o\l\2\c\c\6\6\k\7\e\3\n\4\9\n\u\w\3\v\j\r\3\l\z\7\f\9\i\s\4\a\7\u\f\g\w\8\x\s\h\z\n\q\p\z\e\p\j\u\q\8\r\3\8\l\z\8\k\x\7\0\8\o\z\g\p\5\j\u\1\w\n\a\n\9\j\e\9\v\m\x\g\6\b\g\x\b\n\d\w\z\a\s\u\y\r\0\6\w\x\8\n\p\6\n\0\g\4\1\p\4\s\o\z\h\e\9\n\r\y\r\i\t\2\q\m\z\7\9\i\d\c\j\0\t\c\k\z\k\t\t\k\v\j\1\q\2\t\3\p\4\f\v ]] 00:25:51.051 00:25:51.051 real 0m4.661s 00:25:51.051 user 0m2.754s 00:25:51.051 sys 0m0.928s 00:25:51.051 08:22:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.051 ************************************ 00:25:51.051 END TEST dd_flags_misc_forced_aio 00:25:51.051 ************************************ 00:25:51.051 08:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:51.051 08:22:24 -- dd/posix.sh@1 -- # cleanup 00:25:51.051 08:22:24 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:51.051 08:22:24 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:51.051 ************************************ 00:25:51.051 END TEST spdk_dd_posix 00:25:51.051 ************************************ 00:25:51.051 00:25:51.051 real 0m21.619s 00:25:51.051 user 0m11.428s 00:25:51.051 sys 0m4.402s 00:25:51.051 08:22:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.051 08:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:51.051 08:22:24 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:25:51.051 08:22:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:51.051 08:22:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:51.051 08:22:24 -- 
common/autotest_common.sh@10 -- # set +x 00:25:51.051 ************************************ 00:25:51.051 START TEST spdk_dd_malloc 00:25:51.051 ************************************ 00:25:51.051 08:22:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:25:51.051 * Looking for test storage... 00:25:51.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:51.051 08:22:24 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:51.051 08:22:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.051 08:22:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.051 08:22:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.051 08:22:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.051 08:22:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.051 08:22:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.051 08:22:24 -- paths/export.sh@5 -- # export PATH 00:25:51.051 08:22:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.051 08:22:24 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:25:51.051 08:22:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:51.051 08:22:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:51.051 08:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:51.309 ************************************ 00:25:51.309 START TEST dd_malloc_copy 00:25:51.309 
************************************ 00:25:51.309 08:22:24 -- common/autotest_common.sh@1104 -- # malloc_copy 00:25:51.309 08:22:24 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:25:51.309 08:22:24 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:25:51.309 08:22:24 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:25:51.309 08:22:24 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:25:51.309 08:22:24 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:25:51.309 08:22:24 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:25:51.309 08:22:24 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:25:51.309 08:22:24 -- dd/malloc.sh@28 -- # gen_conf 00:25:51.309 08:22:24 -- dd/common.sh@31 -- # xtrace_disable 00:25:51.309 08:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:51.309 [2024-04-17 08:22:24.435130] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:51.309 [2024-04-17 08:22:24.435286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58911 ] 00:25:51.309 { 00:25:51.309 "subsystems": [ 00:25:51.309 { 00:25:51.309 "subsystem": "bdev", 00:25:51.309 "config": [ 00:25:51.309 { 00:25:51.309 "params": { 00:25:51.309 "block_size": 512, 00:25:51.309 "num_blocks": 1048576, 00:25:51.309 "name": "malloc0" 00:25:51.309 }, 00:25:51.309 "method": "bdev_malloc_create" 00:25:51.309 }, 00:25:51.309 { 00:25:51.309 "params": { 00:25:51.309 "block_size": 512, 00:25:51.309 "num_blocks": 1048576, 00:25:51.309 "name": "malloc1" 00:25:51.309 }, 00:25:51.309 "method": "bdev_malloc_create" 00:25:51.309 }, 00:25:51.309 { 00:25:51.309 "method": "bdev_wait_for_examine" 00:25:51.309 } 00:25:51.309 ] 00:25:51.309 } 00:25:51.309 ] 00:25:51.309 } 00:25:51.309 [2024-04-17 08:22:24.576185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.565 [2024-04-17 08:22:24.678824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.697  Copying: 224/512 [MB] (224 MBps) Copying: 442/512 [MB] (217 MBps) Copying: 512/512 [MB] (average 222 MBps) 00:25:54.697 00:25:54.698 08:22:27 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:25:54.698 08:22:27 -- dd/malloc.sh@33 -- # gen_conf 00:25:54.698 08:22:27 -- dd/common.sh@31 -- # xtrace_disable 00:25:54.698 08:22:27 -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 [2024-04-17 08:22:27.894594] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
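The dd_malloc_copy run above builds two 512 MiB malloc bdevs (1048576 blocks of 512 bytes) from a JSON config passed over /dev/fd/62 and copies malloc0 into malloc1, then back again. A minimal hand-rolled equivalent of the first leg, with the config written to an ordinary file instead of a process-substitution fd (the /tmp path is hypothetical), is roughly:

    # sketch only: recreate the two malloc bdevs and copy malloc0 -> malloc1 by hand
    cfg=/tmp/malloc_copy.json            # hypothetical path, not used by the test itself
    echo '{ "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
      { "method": "bdev_malloc_create", "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
      { "method": "bdev_wait_for_examine" } ] } ] }' > "$cfg"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json "$cfg"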
00:25:54.698 [2024-04-17 08:22:27.894665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:25:54.698 { 00:25:54.698 "subsystems": [ 00:25:54.698 { 00:25:54.698 "subsystem": "bdev", 00:25:54.698 "config": [ 00:25:54.698 { 00:25:54.698 "params": { 00:25:54.698 "block_size": 512, 00:25:54.698 "num_blocks": 1048576, 00:25:54.698 "name": "malloc0" 00:25:54.698 }, 00:25:54.698 "method": "bdev_malloc_create" 00:25:54.698 }, 00:25:54.698 { 00:25:54.698 "params": { 00:25:54.698 "block_size": 512, 00:25:54.698 "num_blocks": 1048576, 00:25:54.698 "name": "malloc1" 00:25:54.698 }, 00:25:54.698 "method": "bdev_malloc_create" 00:25:54.698 }, 00:25:54.698 { 00:25:54.698 "method": "bdev_wait_for_examine" 00:25:54.698 } 00:25:54.698 ] 00:25:54.698 } 00:25:54.698 ] 00:25:54.698 } 00:25:54.955 [2024-04-17 08:22:28.033756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.955 [2024-04-17 08:22:28.136024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.100  Copying: 221/512 [MB] (221 MBps) Copying: 445/512 [MB] (224 MBps) Copying: 512/512 [MB] (average 223 MBps) 00:25:58.100 00:25:58.100 00:25:58.100 real 0m6.858s 00:25:58.100 user 0m5.991s 00:25:58.100 sys 0m0.713s 00:25:58.100 08:22:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.100 ************************************ 00:25:58.100 END TEST dd_malloc_copy 00:25:58.100 ************************************ 00:25:58.100 08:22:31 -- common/autotest_common.sh@10 -- # set +x 00:25:58.100 ************************************ 00:25:58.100 END TEST spdk_dd_malloc 00:25:58.100 ************************************ 00:25:58.100 00:25:58.100 real 0m7.030s 00:25:58.100 user 0m6.062s 00:25:58.100 sys 0m0.823s 00:25:58.100 08:22:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.100 08:22:31 -- common/autotest_common.sh@10 -- # set +x 00:25:58.100 08:22:31 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:25:58.100 08:22:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:58.100 08:22:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:58.100 08:22:31 -- common/autotest_common.sh@10 -- # set +x 00:25:58.100 ************************************ 00:25:58.100 START TEST spdk_dd_bdev_to_bdev 00:25:58.100 ************************************ 00:25:58.100 08:22:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:25:58.360 * Looking for test storage... 
00:25:58.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:58.360 08:22:31 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:58.360 08:22:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.360 08:22:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.360 08:22:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.360 08:22:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.360 08:22:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.360 08:22:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.360 08:22:31 -- paths/export.sh@5 -- # export PATH 00:25:58.360 08:22:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:06.0 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:25:58.360 08:22:31 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:25:58.360 08:22:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:25:58.360 08:22:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:58.360 08:22:31 -- common/autotest_common.sh@10 -- # set +x 00:25:58.360 ************************************ 00:25:58.360 START TEST dd_inflate_file 00:25:58.360 ************************************ 00:25:58.360 08:22:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:25:58.360 [2024-04-17 08:22:31.550547] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:58.360 [2024-04-17 08:22:31.550713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59060 ] 00:25:58.360 [2024-04-17 08:22:31.679835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.619 [2024-04-17 08:22:31.790528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.878  Copying: 64/64 [MB] (average 1828 MBps) 00:25:58.878 00:25:58.878 00:25:58.878 real 0m0.636s 00:25:58.878 user 0m0.348s 00:25:58.878 sys 0m0.167s 00:25:58.878 08:22:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.878 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:58.878 ************************************ 00:25:58.878 END TEST dd_inflate_file 00:25:58.878 ************************************ 00:25:58.878 08:22:32 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:25:58.878 08:22:32 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:25:58.878 08:22:32 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:25:58.878 08:22:32 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:58.878 08:22:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:58.879 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:58.879 08:22:32 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:25:58.879 08:22:32 -- dd/common.sh@31 -- # xtrace_disable 00:25:58.879 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:25:58.879 ************************************ 00:25:58.879 START TEST dd_copy_to_out_bdev 
00:25:58.879 ************************************ 00:25:58.879 08:22:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:25:59.138 [2024-04-17 08:22:32.251488] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:59.138 [2024-04-17 08:22:32.251553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59086 ] 00:25:59.138 { 00:25:59.138 "subsystems": [ 00:25:59.138 { 00:25:59.138 "subsystem": "bdev", 00:25:59.138 "config": [ 00:25:59.138 { 00:25:59.138 "params": { 00:25:59.138 "trtype": "pcie", 00:25:59.138 "traddr": "0000:00:06.0", 00:25:59.138 "name": "Nvme0" 00:25:59.138 }, 00:25:59.138 "method": "bdev_nvme_attach_controller" 00:25:59.138 }, 00:25:59.138 { 00:25:59.138 "params": { 00:25:59.138 "trtype": "pcie", 00:25:59.138 "traddr": "0000:00:07.0", 00:25:59.138 "name": "Nvme1" 00:25:59.138 }, 00:25:59.138 "method": "bdev_nvme_attach_controller" 00:25:59.138 }, 00:25:59.138 { 00:25:59.138 "method": "bdev_wait_for_examine" 00:25:59.138 } 00:25:59.138 ] 00:25:59.138 } 00:25:59.138 ] 00:25:59.138 } 00:25:59.138 [2024-04-17 08:22:32.390769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.398 [2024-04-17 08:22:32.492100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.592  Copying: 64/64 [MB] (average 72 MBps) 00:26:00.593 00:26:00.851 ************************************ 00:26:00.852 END TEST dd_copy_to_out_bdev 00:26:00.852 ************************************ 00:26:00.852 00:26:00.852 real 0m1.720s 00:26:00.852 user 0m1.455s 00:26:00.852 sys 0m0.208s 00:26:00.852 08:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.852 08:22:33 -- common/autotest_common.sh@10 -- # set +x 00:26:00.852 08:22:33 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:26:00.852 08:22:33 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:26:00.852 08:22:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:00.852 08:22:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:00.852 08:22:33 -- common/autotest_common.sh@10 -- # set +x 00:26:00.852 ************************************ 00:26:00.852 START TEST dd_offset_magic 00:26:00.852 ************************************ 00:26:00.852 08:22:33 -- common/autotest_common.sh@1104 -- # offset_magic 00:26:00.852 08:22:33 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:26:00.852 08:22:33 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:26:00.852 08:22:33 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:26:00.852 08:22:33 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:00.852 08:22:33 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:26:00.852 08:22:33 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:00.852 08:22:33 -- dd/common.sh@31 -- # xtrace_disable 00:26:00.852 08:22:33 -- common/autotest_common.sh@10 -- # set +x 00:26:00.852 [2024-04-17 08:22:34.045847] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:00.852 [2024-04-17 08:22:34.046007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59132 ] 00:26:00.852 { 00:26:00.852 "subsystems": [ 00:26:00.852 { 00:26:00.852 "subsystem": "bdev", 00:26:00.852 "config": [ 00:26:00.852 { 00:26:00.852 "params": { 00:26:00.852 "trtype": "pcie", 00:26:00.852 "traddr": "0000:00:06.0", 00:26:00.852 "name": "Nvme0" 00:26:00.852 }, 00:26:00.852 "method": "bdev_nvme_attach_controller" 00:26:00.852 }, 00:26:00.852 { 00:26:00.852 "params": { 00:26:00.852 "trtype": "pcie", 00:26:00.852 "traddr": "0000:00:07.0", 00:26:00.852 "name": "Nvme1" 00:26:00.852 }, 00:26:00.852 "method": "bdev_nvme_attach_controller" 00:26:00.852 }, 00:26:00.852 { 00:26:00.852 "method": "bdev_wait_for_examine" 00:26:00.852 } 00:26:00.852 ] 00:26:00.852 } 00:26:00.852 ] 00:26:00.852 } 00:26:01.120 [2024-04-17 08:22:34.183688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.120 [2024-04-17 08:22:34.287867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.644  Copying: 65/65 [MB] (average 833 MBps) 00:26:01.644 00:26:01.644 08:22:34 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:26:01.644 08:22:34 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:01.644 08:22:34 -- dd/common.sh@31 -- # xtrace_disable 00:26:01.644 08:22:34 -- common/autotest_common.sh@10 -- # set +x 00:26:01.903 [2024-04-17 08:22:34.985697] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:01.903 [2024-04-17 08:22:34.985856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59153 ] 00:26:01.903 { 00:26:01.903 "subsystems": [ 00:26:01.903 { 00:26:01.903 "subsystem": "bdev", 00:26:01.903 "config": [ 00:26:01.903 { 00:26:01.903 "params": { 00:26:01.903 "trtype": "pcie", 00:26:01.903 "traddr": "0000:00:06.0", 00:26:01.903 "name": "Nvme0" 00:26:01.903 }, 00:26:01.903 "method": "bdev_nvme_attach_controller" 00:26:01.903 }, 00:26:01.903 { 00:26:01.903 "params": { 00:26:01.903 "trtype": "pcie", 00:26:01.903 "traddr": "0000:00:07.0", 00:26:01.903 "name": "Nvme1" 00:26:01.903 }, 00:26:01.903 "method": "bdev_nvme_attach_controller" 00:26:01.903 }, 00:26:01.903 { 00:26:01.903 "method": "bdev_wait_for_examine" 00:26:01.903 } 00:26:01.903 ] 00:26:01.903 } 00:26:01.903 ] 00:26:01.903 } 00:26:01.903 [2024-04-17 08:22:35.123767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.903 [2024-04-17 08:22:35.225370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.420  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:02.420 00:26:02.420 08:22:35 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:02.420 08:22:35 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:02.420 08:22:35 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:02.420 08:22:35 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:26:02.420 08:22:35 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:02.420 08:22:35 -- dd/common.sh@31 -- # xtrace_disable 00:26:02.420 08:22:35 -- common/autotest_common.sh@10 -- # set +x 00:26:02.420 [2024-04-17 08:22:35.701867] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:02.420 [2024-04-17 08:22:35.702030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:26:02.420 { 00:26:02.420 "subsystems": [ 00:26:02.420 { 00:26:02.420 "subsystem": "bdev", 00:26:02.420 "config": [ 00:26:02.420 { 00:26:02.420 "params": { 00:26:02.420 "trtype": "pcie", 00:26:02.420 "traddr": "0000:00:06.0", 00:26:02.420 "name": "Nvme0" 00:26:02.420 }, 00:26:02.420 "method": "bdev_nvme_attach_controller" 00:26:02.420 }, 00:26:02.420 { 00:26:02.420 "params": { 00:26:02.420 "trtype": "pcie", 00:26:02.420 "traddr": "0000:00:07.0", 00:26:02.420 "name": "Nvme1" 00:26:02.420 }, 00:26:02.420 "method": "bdev_nvme_attach_controller" 00:26:02.420 }, 00:26:02.420 { 00:26:02.420 "method": "bdev_wait_for_examine" 00:26:02.420 } 00:26:02.420 ] 00:26:02.420 } 00:26:02.420 ] 00:26:02.420 } 00:26:02.679 [2024-04-17 08:22:35.843452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.679 [2024-04-17 08:22:35.942395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.197  Copying: 65/65 [MB] (average 812 MBps) 00:26:03.197 00:26:03.197 08:22:36 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:26:03.197 08:22:36 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:03.197 08:22:36 -- dd/common.sh@31 -- # xtrace_disable 00:26:03.197 08:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:03.457 [2024-04-17 08:22:36.553488] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:03.457 [2024-04-17 08:22:36.553560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:26:03.457 { 00:26:03.457 "subsystems": [ 00:26:03.457 { 00:26:03.457 "subsystem": "bdev", 00:26:03.457 "config": [ 00:26:03.457 { 00:26:03.457 "params": { 00:26:03.457 "trtype": "pcie", 00:26:03.457 "traddr": "0000:00:06.0", 00:26:03.457 "name": "Nvme0" 00:26:03.457 }, 00:26:03.457 "method": "bdev_nvme_attach_controller" 00:26:03.457 }, 00:26:03.457 { 00:26:03.457 "params": { 00:26:03.457 "trtype": "pcie", 00:26:03.457 "traddr": "0000:00:07.0", 00:26:03.457 "name": "Nvme1" 00:26:03.457 }, 00:26:03.457 "method": "bdev_nvme_attach_controller" 00:26:03.457 }, 00:26:03.457 { 00:26:03.457 "method": "bdev_wait_for_examine" 00:26:03.457 } 00:26:03.457 ] 00:26:03.457 } 00:26:03.457 ] 00:26:03.457 } 00:26:03.457 [2024-04-17 08:22:36.692973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.717 [2024-04-17 08:22:36.794926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.977  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:03.977 00:26:03.977 ************************************ 00:26:03.977 END TEST dd_offset_magic 00:26:03.977 ************************************ 00:26:03.977 08:22:37 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:03.977 08:22:37 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:03.977 00:26:03.977 real 0m3.239s 00:26:03.977 user 0m2.413s 00:26:03.977 sys 0m0.649s 00:26:03.977 08:22:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.977 08:22:37 -- common/autotest_common.sh@10 -- # set +x 00:26:03.977 08:22:37 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:26:03.977 08:22:37 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:26:03.977 08:22:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:03.977 08:22:37 -- dd/common.sh@11 -- # local nvme_ref= 00:26:03.977 08:22:37 -- dd/common.sh@12 -- # local size=4194330 00:26:03.977 08:22:37 -- dd/common.sh@14 -- # local bs=1048576 00:26:03.977 08:22:37 -- dd/common.sh@15 -- # local count=5 00:26:03.977 08:22:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:26:03.977 08:22:37 -- dd/common.sh@18 -- # gen_conf 00:26:03.977 08:22:37 -- dd/common.sh@31 -- # xtrace_disable 00:26:03.977 08:22:37 -- common/autotest_common.sh@10 -- # set +x 00:26:04.236 [2024-04-17 08:22:37.323667] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
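dd_offset_magic, which finishes above, writes the 'This Is Our Magic, find it' marker, copies 65 one-MiB blocks from Nvme0n1 into Nvme1n1 at --seek offsets 16 and 64, then reads a single block back with --skip and checks that the first 26 bytes still carry the marker. One round trip, assuming the same two-controller pcie config has been saved to cfg.json (a hypothetical file name), reduces to:

    # sketch only: one seek/skip round trip of the magic marker
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json cfg.json
    "$DD" --ib=Nvme1n1 --of=/tmp/check.bin --count=1 --skip=16 --bs=1048576 --json cfg.json
    head -c 26 /tmp/check.bin    # expect: This Is Our Magic, find it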
00:26:04.236 [2024-04-17 08:22:37.323727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59217 ] 00:26:04.236 { 00:26:04.236 "subsystems": [ 00:26:04.236 { 00:26:04.236 "subsystem": "bdev", 00:26:04.236 "config": [ 00:26:04.236 { 00:26:04.236 "params": { 00:26:04.236 "trtype": "pcie", 00:26:04.236 "traddr": "0000:00:06.0", 00:26:04.236 "name": "Nvme0" 00:26:04.236 }, 00:26:04.236 "method": "bdev_nvme_attach_controller" 00:26:04.236 }, 00:26:04.236 { 00:26:04.236 "params": { 00:26:04.236 "trtype": "pcie", 00:26:04.236 "traddr": "0000:00:07.0", 00:26:04.236 "name": "Nvme1" 00:26:04.236 }, 00:26:04.236 "method": "bdev_nvme_attach_controller" 00:26:04.236 }, 00:26:04.236 { 00:26:04.236 "method": "bdev_wait_for_examine" 00:26:04.236 } 00:26:04.236 ] 00:26:04.236 } 00:26:04.236 ] 00:26:04.236 } 00:26:04.236 [2024-04-17 08:22:37.461460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.236 [2024-04-17 08:22:37.552609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.754  Copying: 5120/5120 [kB] (average 1000 MBps) 00:26:04.754 00:26:04.754 08:22:37 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:26:04.754 08:22:37 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:26:04.754 08:22:37 -- dd/common.sh@11 -- # local nvme_ref= 00:26:04.754 08:22:37 -- dd/common.sh@12 -- # local size=4194330 00:26:04.754 08:22:37 -- dd/common.sh@14 -- # local bs=1048576 00:26:04.754 08:22:37 -- dd/common.sh@15 -- # local count=5 00:26:04.754 08:22:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:26:04.754 08:22:37 -- dd/common.sh@18 -- # gen_conf 00:26:04.754 08:22:37 -- dd/common.sh@31 -- # xtrace_disable 00:26:04.754 08:22:37 -- common/autotest_common.sh@10 -- # set +x 00:26:04.754 [2024-04-17 08:22:38.034490] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:04.754 [2024-04-17 08:22:38.034558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59237 ] 00:26:04.754 { 00:26:04.754 "subsystems": [ 00:26:04.754 { 00:26:04.754 "subsystem": "bdev", 00:26:04.754 "config": [ 00:26:04.754 { 00:26:04.754 "params": { 00:26:04.754 "trtype": "pcie", 00:26:04.754 "traddr": "0000:00:06.0", 00:26:04.754 "name": "Nvme0" 00:26:04.754 }, 00:26:04.754 "method": "bdev_nvme_attach_controller" 00:26:04.754 }, 00:26:04.754 { 00:26:04.754 "params": { 00:26:04.754 "trtype": "pcie", 00:26:04.754 "traddr": "0000:00:07.0", 00:26:04.754 "name": "Nvme1" 00:26:04.754 }, 00:26:04.754 "method": "bdev_nvme_attach_controller" 00:26:04.754 }, 00:26:04.754 { 00:26:04.754 "method": "bdev_wait_for_examine" 00:26:04.754 } 00:26:04.754 ] 00:26:04.754 } 00:26:04.754 ] 00:26:04.754 } 00:26:05.013 [2024-04-17 08:22:38.173412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.013 [2024-04-17 08:22:38.272883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.533  Copying: 5120/5120 [kB] (average 625 MBps) 00:26:05.533 00:26:05.533 08:22:38 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:26:05.533 ************************************ 00:26:05.533 END TEST spdk_dd_bdev_to_bdev 00:26:05.533 ************************************ 00:26:05.533 00:26:05.533 real 0m7.373s 00:26:05.533 user 0m5.374s 00:26:05.533 sys 0m1.539s 00:26:05.533 08:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.533 08:22:38 -- common/autotest_common.sh@10 -- # set +x 00:26:05.533 08:22:38 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:26:05.533 08:22:38 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:26:05.533 08:22:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:05.533 08:22:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:05.533 08:22:38 -- common/autotest_common.sh@10 -- # set +x 00:26:05.533 ************************************ 00:26:05.533 START TEST spdk_dd_uring 00:26:05.533 ************************************ 00:26:05.533 08:22:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:26:05.792 * Looking for test storage... 
00:26:05.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:05.792 08:22:38 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:05.792 08:22:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.792 08:22:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.792 08:22:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.792 08:22:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.792 08:22:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.792 08:22:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.792 08:22:38 -- paths/export.sh@5 -- # export PATH 00:26:05.792 08:22:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.792 08:22:38 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:26:05.793 08:22:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:05.793 08:22:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:05.793 08:22:38 -- common/autotest_common.sh@10 -- # set +x 00:26:05.793 ************************************ 00:26:05.793 START TEST dd_uring_copy 00:26:05.793 ************************************ 00:26:05.793 08:22:38 -- common/autotest_common.sh@1104 -- # uring_zram_copy 00:26:05.793 08:22:38 -- dd/uring.sh@15 -- # local zram_dev_id 00:26:05.793 08:22:38 -- dd/uring.sh@16 -- # local magic 00:26:05.793 08:22:38 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:26:05.793 08:22:38 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:26:05.793 08:22:38 -- dd/uring.sh@19 -- # local verify_magic 00:26:05.793 08:22:38 -- dd/uring.sh@21 -- # init_zram 00:26:05.793 08:22:38 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:26:05.793 08:22:38 -- dd/common.sh@164 -- # return 00:26:05.793 08:22:38 -- dd/uring.sh@22 -- # create_zram_dev 00:26:05.793 08:22:38 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:26:05.793 08:22:38 -- dd/uring.sh@22 -- # zram_dev_id=1 00:26:05.793 08:22:38 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:26:05.793 08:22:38 -- dd/common.sh@181 -- # local id=1 00:26:05.793 08:22:38 -- dd/common.sh@182 -- # local size=512M 00:26:05.793 08:22:38 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:26:05.793 08:22:38 -- dd/common.sh@186 -- # echo 512M 00:26:05.793 08:22:38 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:26:05.793 08:22:38 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:26:05.793 08:22:38 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:26:05.793 08:22:38 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:26:05.793 08:22:38 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:26:05.793 08:22:38 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:26:05.793 08:22:38 -- dd/uring.sh@41 -- # gen_bytes 1024 00:26:05.793 08:22:38 -- dd/common.sh@98 -- # xtrace_disable 00:26:05.793 08:22:38 -- common/autotest_common.sh@10 -- # set +x 00:26:05.793 08:22:38 -- dd/uring.sh@41 -- # magic=tffzxzgmojd6k59fmjhdxqu7zi0ca2y81jzp2gf40yyixck44dus4qjqxlyxu8m3nkrk23jm4m51ks9mu9rjp1d3lg46xm4s377rgt4cw1mbl54atcaih3wv296j0owshvph4i358ovi8m30l9vp0j9zaq73v6a3pqy31no8u0j6vzfuhqvbdad4peahu2n4xzx96d2aes7jp25cuyeieajaqsc78iqc1huk6tjdzz1wviq0acwhbwtkr730n2vdoqed8pz2rgmzcsq3eminmtgjwgwvz81opdbr35cw4xv186qlyrwj8smgkhbe6cx6fhnd93xnp43numri1idt413m027z4sx9vhsci1r4iek13sk742bb11lvds2q3qv73k9a1x8o7dvrzhcxzfbzpvoe8ugul00l69lo1pafv7xkiigiwsnb52dip7s66ac4193lp5kjb4uj7xsu63waf7wrwzwa8kfapmyzqxuy9a43ezgs03e41ax5f51xtr53inc0q469q4djhe4u5bumgpk5c4onyuxxykwusjt32yx2vyrgy5vjzjyeofmfj5ux2tmwj583j5iejjk1sb8qc1ilnqo5mdxejlmq1vl56nhgywuj2b6amefa7y18sspkq5ql5vtzn28cogsr2x7fcde6eqbbyyc6ibr4qarp7sckarulkwe4v1nlrwkg4a063bplhn8om7trcxhn8synvwpw7g0o7kc74v01tbrmmp7rzylx5uujddzptdztidgk4amal9wnchzdvt70r6z2o66etgule7ppbnkgv46408i8tjemcflfq09jjq833mqcvw825knelugtjdirqd8pagz1uhe0wmhk30qumk49nmc6bn26ii5j4c2ykjhbn6egn90trtzmvjtn6uke80wibgoq4h4ypnzmh8xbfc90kai314t3qxcc4nmr9lkcgo3zr054m2mqbua7c39728d9olj12d7r11qxhx108weq17onfg5w1ndr85bn4bqim2vn 00:26:05.793 08:22:38 -- dd/uring.sh@42 -- # echo 
tffzxzgmojd6k59fmjhdxqu7zi0ca2y81jzp2gf40yyixck44dus4qjqxlyxu8m3nkrk23jm4m51ks9mu9rjp1d3lg46xm4s377rgt4cw1mbl54atcaih3wv296j0owshvph4i358ovi8m30l9vp0j9zaq73v6a3pqy31no8u0j6vzfuhqvbdad4peahu2n4xzx96d2aes7jp25cuyeieajaqsc78iqc1huk6tjdzz1wviq0acwhbwtkr730n2vdoqed8pz2rgmzcsq3eminmtgjwgwvz81opdbr35cw4xv186qlyrwj8smgkhbe6cx6fhnd93xnp43numri1idt413m027z4sx9vhsci1r4iek13sk742bb11lvds2q3qv73k9a1x8o7dvrzhcxzfbzpvoe8ugul00l69lo1pafv7xkiigiwsnb52dip7s66ac4193lp5kjb4uj7xsu63waf7wrwzwa8kfapmyzqxuy9a43ezgs03e41ax5f51xtr53inc0q469q4djhe4u5bumgpk5c4onyuxxykwusjt32yx2vyrgy5vjzjyeofmfj5ux2tmwj583j5iejjk1sb8qc1ilnqo5mdxejlmq1vl56nhgywuj2b6amefa7y18sspkq5ql5vtzn28cogsr2x7fcde6eqbbyyc6ibr4qarp7sckarulkwe4v1nlrwkg4a063bplhn8om7trcxhn8synvwpw7g0o7kc74v01tbrmmp7rzylx5uujddzptdztidgk4amal9wnchzdvt70r6z2o66etgule7ppbnkgv46408i8tjemcflfq09jjq833mqcvw825knelugtjdirqd8pagz1uhe0wmhk30qumk49nmc6bn26ii5j4c2ykjhbn6egn90trtzmvjtn6uke80wibgoq4h4ypnzmh8xbfc90kai314t3qxcc4nmr9lkcgo3zr054m2mqbua7c39728d9olj12d7r11qxhx108weq17onfg5w1ndr85bn4bqim2vn 00:26:05.793 08:22:38 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:26:05.793 [2024-04-17 08:22:38.996051] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:05.793 [2024-04-17 08:22:38.996165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59304 ] 00:26:06.051 [2024-04-17 08:22:39.126375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.051 [2024-04-17 08:22:39.241807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.876  Copying: 511/511 [MB] (average 1615 MBps) 00:26:06.876 00:26:06.876 08:22:40 -- dd/uring.sh@54 -- # gen_conf 00:26:06.876 08:22:40 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:26:06.876 08:22:40 -- dd/common.sh@31 -- # xtrace_disable 00:26:06.876 08:22:40 -- common/autotest_common.sh@10 -- # set +x 00:26:06.876 [2024-04-17 08:22:40.161821] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
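The uring0 bdev used above sits on a freshly hot-added 512M zram device (/dev/zram1 on this runner), created through the zram-control sysfs interface before bdev_uring_create is pointed at it. Reproducing just that device setup outside the harness — the resulting index will differ from machine to machine — is roughly:

    # sketch only: carve out a 512M zram device to back a uring bdev
    id=$(cat /sys/class/zram-control/hot_add)   # prints the index of the new device
    echo 512M > /sys/block/zram${id}/disksize
    ls -l /dev/zram${id}                        # this path becomes bdev_uring_create's filename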
00:26:06.876 [2024-04-17 08:22:40.161999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59319 ] 00:26:06.876 { 00:26:06.876 "subsystems": [ 00:26:06.876 { 00:26:06.876 "subsystem": "bdev", 00:26:06.876 "config": [ 00:26:06.876 { 00:26:06.876 "params": { 00:26:06.876 "block_size": 512, 00:26:06.876 "num_blocks": 1048576, 00:26:06.876 "name": "malloc0" 00:26:06.876 }, 00:26:06.876 "method": "bdev_malloc_create" 00:26:06.876 }, 00:26:06.876 { 00:26:06.876 "params": { 00:26:06.876 "filename": "/dev/zram1", 00:26:06.876 "name": "uring0" 00:26:06.876 }, 00:26:06.876 "method": "bdev_uring_create" 00:26:06.876 }, 00:26:06.876 { 00:26:06.876 "method": "bdev_wait_for_examine" 00:26:06.876 } 00:26:06.876 ] 00:26:06.876 } 00:26:06.876 ] 00:26:06.876 } 00:26:07.134 [2024-04-17 08:22:40.300839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.134 [2024-04-17 08:22:40.409382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.973  Copying: 233/512 [MB] (233 MBps) Copying: 468/512 [MB] (235 MBps) Copying: 512/512 [MB] (average 235 MBps) 00:26:09.973 00:26:09.973 08:22:43 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:26:09.973 08:22:43 -- dd/uring.sh@60 -- # gen_conf 00:26:09.973 08:22:43 -- dd/common.sh@31 -- # xtrace_disable 00:26:09.973 08:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:09.973 [2024-04-17 08:22:43.246580] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:09.974 [2024-04-17 08:22:43.246723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59357 ] 00:26:09.974 { 00:26:09.974 "subsystems": [ 00:26:09.974 { 00:26:09.974 "subsystem": "bdev", 00:26:09.974 "config": [ 00:26:09.974 { 00:26:09.974 "params": { 00:26:09.974 "block_size": 512, 00:26:09.974 "num_blocks": 1048576, 00:26:09.974 "name": "malloc0" 00:26:09.974 }, 00:26:09.974 "method": "bdev_malloc_create" 00:26:09.974 }, 00:26:09.974 { 00:26:09.974 "params": { 00:26:09.974 "filename": "/dev/zram1", 00:26:09.974 "name": "uring0" 00:26:09.974 }, 00:26:09.974 "method": "bdev_uring_create" 00:26:09.974 }, 00:26:09.974 { 00:26:09.974 "method": "bdev_wait_for_examine" 00:26:09.974 } 00:26:09.974 ] 00:26:09.974 } 00:26:09.974 ] 00:26:09.974 } 00:26:10.233 [2024-04-17 08:22:43.386909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.233 [2024-04-17 08:22:43.495279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.741  Copying: 181/512 [MB] (181 MBps) Copying: 369/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 184 MBps) 00:26:13.741 00:26:13.741 08:22:46 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:26:13.741 08:22:46 -- dd/uring.sh@66 -- # [[ 
tffzxzgmojd6k59fmjhdxqu7zi0ca2y81jzp2gf40yyixck44dus4qjqxlyxu8m3nkrk23jm4m51ks9mu9rjp1d3lg46xm4s377rgt4cw1mbl54atcaih3wv296j0owshvph4i358ovi8m30l9vp0j9zaq73v6a3pqy31no8u0j6vzfuhqvbdad4peahu2n4xzx96d2aes7jp25cuyeieajaqsc78iqc1huk6tjdzz1wviq0acwhbwtkr730n2vdoqed8pz2rgmzcsq3eminmtgjwgwvz81opdbr35cw4xv186qlyrwj8smgkhbe6cx6fhnd93xnp43numri1idt413m027z4sx9vhsci1r4iek13sk742bb11lvds2q3qv73k9a1x8o7dvrzhcxzfbzpvoe8ugul00l69lo1pafv7xkiigiwsnb52dip7s66ac4193lp5kjb4uj7xsu63waf7wrwzwa8kfapmyzqxuy9a43ezgs03e41ax5f51xtr53inc0q469q4djhe4u5bumgpk5c4onyuxxykwusjt32yx2vyrgy5vjzjyeofmfj5ux2tmwj583j5iejjk1sb8qc1ilnqo5mdxejlmq1vl56nhgywuj2b6amefa7y18sspkq5ql5vtzn28cogsr2x7fcde6eqbbyyc6ibr4qarp7sckarulkwe4v1nlrwkg4a063bplhn8om7trcxhn8synvwpw7g0o7kc74v01tbrmmp7rzylx5uujddzptdztidgk4amal9wnchzdvt70r6z2o66etgule7ppbnkgv46408i8tjemcflfq09jjq833mqcvw825knelugtjdirqd8pagz1uhe0wmhk30qumk49nmc6bn26ii5j4c2ykjhbn6egn90trtzmvjtn6uke80wibgoq4h4ypnzmh8xbfc90kai314t3qxcc4nmr9lkcgo3zr054m2mqbua7c39728d9olj12d7r11qxhx108weq17onfg5w1ndr85bn4bqim2vn == \t\f\f\z\x\z\g\m\o\j\d\6\k\5\9\f\m\j\h\d\x\q\u\7\z\i\0\c\a\2\y\8\1\j\z\p\2\g\f\4\0\y\y\i\x\c\k\4\4\d\u\s\4\q\j\q\x\l\y\x\u\8\m\3\n\k\r\k\2\3\j\m\4\m\5\1\k\s\9\m\u\9\r\j\p\1\d\3\l\g\4\6\x\m\4\s\3\7\7\r\g\t\4\c\w\1\m\b\l\5\4\a\t\c\a\i\h\3\w\v\2\9\6\j\0\o\w\s\h\v\p\h\4\i\3\5\8\o\v\i\8\m\3\0\l\9\v\p\0\j\9\z\a\q\7\3\v\6\a\3\p\q\y\3\1\n\o\8\u\0\j\6\v\z\f\u\h\q\v\b\d\a\d\4\p\e\a\h\u\2\n\4\x\z\x\9\6\d\2\a\e\s\7\j\p\2\5\c\u\y\e\i\e\a\j\a\q\s\c\7\8\i\q\c\1\h\u\k\6\t\j\d\z\z\1\w\v\i\q\0\a\c\w\h\b\w\t\k\r\7\3\0\n\2\v\d\o\q\e\d\8\p\z\2\r\g\m\z\c\s\q\3\e\m\i\n\m\t\g\j\w\g\w\v\z\8\1\o\p\d\b\r\3\5\c\w\4\x\v\1\8\6\q\l\y\r\w\j\8\s\m\g\k\h\b\e\6\c\x\6\f\h\n\d\9\3\x\n\p\4\3\n\u\m\r\i\1\i\d\t\4\1\3\m\0\2\7\z\4\s\x\9\v\h\s\c\i\1\r\4\i\e\k\1\3\s\k\7\4\2\b\b\1\1\l\v\d\s\2\q\3\q\v\7\3\k\9\a\1\x\8\o\7\d\v\r\z\h\c\x\z\f\b\z\p\v\o\e\8\u\g\u\l\0\0\l\6\9\l\o\1\p\a\f\v\7\x\k\i\i\g\i\w\s\n\b\5\2\d\i\p\7\s\6\6\a\c\4\1\9\3\l\p\5\k\j\b\4\u\j\7\x\s\u\6\3\w\a\f\7\w\r\w\z\w\a\8\k\f\a\p\m\y\z\q\x\u\y\9\a\4\3\e\z\g\s\0\3\e\4\1\a\x\5\f\5\1\x\t\r\5\3\i\n\c\0\q\4\6\9\q\4\d\j\h\e\4\u\5\b\u\m\g\p\k\5\c\4\o\n\y\u\x\x\y\k\w\u\s\j\t\3\2\y\x\2\v\y\r\g\y\5\v\j\z\j\y\e\o\f\m\f\j\5\u\x\2\t\m\w\j\5\8\3\j\5\i\e\j\j\k\1\s\b\8\q\c\1\i\l\n\q\o\5\m\d\x\e\j\l\m\q\1\v\l\5\6\n\h\g\y\w\u\j\2\b\6\a\m\e\f\a\7\y\1\8\s\s\p\k\q\5\q\l\5\v\t\z\n\2\8\c\o\g\s\r\2\x\7\f\c\d\e\6\e\q\b\b\y\y\c\6\i\b\r\4\q\a\r\p\7\s\c\k\a\r\u\l\k\w\e\4\v\1\n\l\r\w\k\g\4\a\0\6\3\b\p\l\h\n\8\o\m\7\t\r\c\x\h\n\8\s\y\n\v\w\p\w\7\g\0\o\7\k\c\7\4\v\0\1\t\b\r\m\m\p\7\r\z\y\l\x\5\u\u\j\d\d\z\p\t\d\z\t\i\d\g\k\4\a\m\a\l\9\w\n\c\h\z\d\v\t\7\0\r\6\z\2\o\6\6\e\t\g\u\l\e\7\p\p\b\n\k\g\v\4\6\4\0\8\i\8\t\j\e\m\c\f\l\f\q\0\9\j\j\q\8\3\3\m\q\c\v\w\8\2\5\k\n\e\l\u\g\t\j\d\i\r\q\d\8\p\a\g\z\1\u\h\e\0\w\m\h\k\3\0\q\u\m\k\4\9\n\m\c\6\b\n\2\6\i\i\5\j\4\c\2\y\k\j\h\b\n\6\e\g\n\9\0\t\r\t\z\m\v\j\t\n\6\u\k\e\8\0\w\i\b\g\o\q\4\h\4\y\p\n\z\m\h\8\x\b\f\c\9\0\k\a\i\3\1\4\t\3\q\x\c\c\4\n\m\r\9\l\k\c\g\o\3\z\r\0\5\4\m\2\m\q\b\u\a\7\c\3\9\7\2\8\d\9\o\l\j\1\2\d\7\r\1\1\q\x\h\x\1\0\8\w\e\q\1\7\o\n\f\g\5\w\1\n\d\r\8\5\b\n\4\b\q\i\m\2\v\n ]] 00:26:13.741 08:22:46 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:26:13.741 08:22:46 -- dd/uring.sh@69 -- # [[ 
tffzxzgmojd6k59fmjhdxqu7zi0ca2y81jzp2gf40yyixck44dus4qjqxlyxu8m3nkrk23jm4m51ks9mu9rjp1d3lg46xm4s377rgt4cw1mbl54atcaih3wv296j0owshvph4i358ovi8m30l9vp0j9zaq73v6a3pqy31no8u0j6vzfuhqvbdad4peahu2n4xzx96d2aes7jp25cuyeieajaqsc78iqc1huk6tjdzz1wviq0acwhbwtkr730n2vdoqed8pz2rgmzcsq3eminmtgjwgwvz81opdbr35cw4xv186qlyrwj8smgkhbe6cx6fhnd93xnp43numri1idt413m027z4sx9vhsci1r4iek13sk742bb11lvds2q3qv73k9a1x8o7dvrzhcxzfbzpvoe8ugul00l69lo1pafv7xkiigiwsnb52dip7s66ac4193lp5kjb4uj7xsu63waf7wrwzwa8kfapmyzqxuy9a43ezgs03e41ax5f51xtr53inc0q469q4djhe4u5bumgpk5c4onyuxxykwusjt32yx2vyrgy5vjzjyeofmfj5ux2tmwj583j5iejjk1sb8qc1ilnqo5mdxejlmq1vl56nhgywuj2b6amefa7y18sspkq5ql5vtzn28cogsr2x7fcde6eqbbyyc6ibr4qarp7sckarulkwe4v1nlrwkg4a063bplhn8om7trcxhn8synvwpw7g0o7kc74v01tbrmmp7rzylx5uujddzptdztidgk4amal9wnchzdvt70r6z2o66etgule7ppbnkgv46408i8tjemcflfq09jjq833mqcvw825knelugtjdirqd8pagz1uhe0wmhk30qumk49nmc6bn26ii5j4c2ykjhbn6egn90trtzmvjtn6uke80wibgoq4h4ypnzmh8xbfc90kai314t3qxcc4nmr9lkcgo3zr054m2mqbua7c39728d9olj12d7r11qxhx108weq17onfg5w1ndr85bn4bqim2vn == \t\f\f\z\x\z\g\m\o\j\d\6\k\5\9\f\m\j\h\d\x\q\u\7\z\i\0\c\a\2\y\8\1\j\z\p\2\g\f\4\0\y\y\i\x\c\k\4\4\d\u\s\4\q\j\q\x\l\y\x\u\8\m\3\n\k\r\k\2\3\j\m\4\m\5\1\k\s\9\m\u\9\r\j\p\1\d\3\l\g\4\6\x\m\4\s\3\7\7\r\g\t\4\c\w\1\m\b\l\5\4\a\t\c\a\i\h\3\w\v\2\9\6\j\0\o\w\s\h\v\p\h\4\i\3\5\8\o\v\i\8\m\3\0\l\9\v\p\0\j\9\z\a\q\7\3\v\6\a\3\p\q\y\3\1\n\o\8\u\0\j\6\v\z\f\u\h\q\v\b\d\a\d\4\p\e\a\h\u\2\n\4\x\z\x\9\6\d\2\a\e\s\7\j\p\2\5\c\u\y\e\i\e\a\j\a\q\s\c\7\8\i\q\c\1\h\u\k\6\t\j\d\z\z\1\w\v\i\q\0\a\c\w\h\b\w\t\k\r\7\3\0\n\2\v\d\o\q\e\d\8\p\z\2\r\g\m\z\c\s\q\3\e\m\i\n\m\t\g\j\w\g\w\v\z\8\1\o\p\d\b\r\3\5\c\w\4\x\v\1\8\6\q\l\y\r\w\j\8\s\m\g\k\h\b\e\6\c\x\6\f\h\n\d\9\3\x\n\p\4\3\n\u\m\r\i\1\i\d\t\4\1\3\m\0\2\7\z\4\s\x\9\v\h\s\c\i\1\r\4\i\e\k\1\3\s\k\7\4\2\b\b\1\1\l\v\d\s\2\q\3\q\v\7\3\k\9\a\1\x\8\o\7\d\v\r\z\h\c\x\z\f\b\z\p\v\o\e\8\u\g\u\l\0\0\l\6\9\l\o\1\p\a\f\v\7\x\k\i\i\g\i\w\s\n\b\5\2\d\i\p\7\s\6\6\a\c\4\1\9\3\l\p\5\k\j\b\4\u\j\7\x\s\u\6\3\w\a\f\7\w\r\w\z\w\a\8\k\f\a\p\m\y\z\q\x\u\y\9\a\4\3\e\z\g\s\0\3\e\4\1\a\x\5\f\5\1\x\t\r\5\3\i\n\c\0\q\4\6\9\q\4\d\j\h\e\4\u\5\b\u\m\g\p\k\5\c\4\o\n\y\u\x\x\y\k\w\u\s\j\t\3\2\y\x\2\v\y\r\g\y\5\v\j\z\j\y\e\o\f\m\f\j\5\u\x\2\t\m\w\j\5\8\3\j\5\i\e\j\j\k\1\s\b\8\q\c\1\i\l\n\q\o\5\m\d\x\e\j\l\m\q\1\v\l\5\6\n\h\g\y\w\u\j\2\b\6\a\m\e\f\a\7\y\1\8\s\s\p\k\q\5\q\l\5\v\t\z\n\2\8\c\o\g\s\r\2\x\7\f\c\d\e\6\e\q\b\b\y\y\c\6\i\b\r\4\q\a\r\p\7\s\c\k\a\r\u\l\k\w\e\4\v\1\n\l\r\w\k\g\4\a\0\6\3\b\p\l\h\n\8\o\m\7\t\r\c\x\h\n\8\s\y\n\v\w\p\w\7\g\0\o\7\k\c\7\4\v\0\1\t\b\r\m\m\p\7\r\z\y\l\x\5\u\u\j\d\d\z\p\t\d\z\t\i\d\g\k\4\a\m\a\l\9\w\n\c\h\z\d\v\t\7\0\r\6\z\2\o\6\6\e\t\g\u\l\e\7\p\p\b\n\k\g\v\4\6\4\0\8\i\8\t\j\e\m\c\f\l\f\q\0\9\j\j\q\8\3\3\m\q\c\v\w\8\2\5\k\n\e\l\u\g\t\j\d\i\r\q\d\8\p\a\g\z\1\u\h\e\0\w\m\h\k\3\0\q\u\m\k\4\9\n\m\c\6\b\n\2\6\i\i\5\j\4\c\2\y\k\j\h\b\n\6\e\g\n\9\0\t\r\t\z\m\v\j\t\n\6\u\k\e\8\0\w\i\b\g\o\q\4\h\4\y\p\n\z\m\h\8\x\b\f\c\9\0\k\a\i\3\1\4\t\3\q\x\c\c\4\n\m\r\9\l\k\c\g\o\3\z\r\0\5\4\m\2\m\q\b\u\a\7\c\3\9\7\2\8\d\9\o\l\j\1\2\d\7\r\1\1\q\x\h\x\1\0\8\w\e\q\1\7\o\n\f\g\5\w\1\n\d\r\8\5\b\n\4\b\q\i\m\2\v\n ]] 00:26:13.741 08:22:46 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:26:14.000 08:22:47 -- dd/uring.sh@75 -- # gen_conf 00:26:14.000 08:22:47 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:26:14.000 08:22:47 -- dd/common.sh@31 -- # xtrace_disable 00:26:14.000 08:22:47 -- common/autotest_common.sh@10 -- # set +x 
00:26:14.000 [2024-04-17 08:22:47.163411] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:14.000 [2024-04-17 08:22:47.163476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59442 ] 00:26:14.000 { 00:26:14.000 "subsystems": [ 00:26:14.000 { 00:26:14.000 "subsystem": "bdev", 00:26:14.000 "config": [ 00:26:14.000 { 00:26:14.000 "params": { 00:26:14.000 "block_size": 512, 00:26:14.000 "num_blocks": 1048576, 00:26:14.000 "name": "malloc0" 00:26:14.000 }, 00:26:14.000 "method": "bdev_malloc_create" 00:26:14.000 }, 00:26:14.000 { 00:26:14.000 "params": { 00:26:14.000 "filename": "/dev/zram1", 00:26:14.000 "name": "uring0" 00:26:14.000 }, 00:26:14.000 "method": "bdev_uring_create" 00:26:14.000 }, 00:26:14.000 { 00:26:14.000 "method": "bdev_wait_for_examine" 00:26:14.000 } 00:26:14.000 ] 00:26:14.000 } 00:26:14.000 ] 00:26:14.000 } 00:26:14.000 [2024-04-17 08:22:47.302566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.258 [2024-04-17 08:22:47.398197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.397  Copying: 196/512 [MB] (196 MBps) Copying: 385/512 [MB] (188 MBps) Copying: 512/512 [MB] (average 192 MBps) 00:26:17.397 00:26:17.397 08:22:50 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:26:17.397 08:22:50 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:26:17.397 08:22:50 -- dd/uring.sh@87 -- # : 00:26:17.397 08:22:50 -- dd/uring.sh@87 -- # : 00:26:17.397 08:22:50 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:26:17.397 08:22:50 -- dd/uring.sh@87 -- # gen_conf 00:26:17.397 08:22:50 -- dd/common.sh@31 -- # xtrace_disable 00:26:17.397 08:22:50 -- common/autotest_common.sh@10 -- # set +x 00:26:17.397 [2024-04-17 08:22:50.661680] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:17.397 [2024-04-17 08:22:50.661851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59487 ] 00:26:17.397 { 00:26:17.397 "subsystems": [ 00:26:17.397 { 00:26:17.397 "subsystem": "bdev", 00:26:17.397 "config": [ 00:26:17.397 { 00:26:17.397 "params": { 00:26:17.397 "block_size": 512, 00:26:17.397 "num_blocks": 1048576, 00:26:17.397 "name": "malloc0" 00:26:17.397 }, 00:26:17.397 "method": "bdev_malloc_create" 00:26:17.397 }, 00:26:17.397 { 00:26:17.397 "params": { 00:26:17.397 "filename": "/dev/zram1", 00:26:17.397 "name": "uring0" 00:26:17.397 }, 00:26:17.397 "method": "bdev_uring_create" 00:26:17.397 }, 00:26:17.397 { 00:26:17.397 "params": { 00:26:17.397 "name": "uring0" 00:26:17.397 }, 00:26:17.397 "method": "bdev_uring_delete" 00:26:17.397 }, 00:26:17.397 { 00:26:17.397 "method": "bdev_wait_for_examine" 00:26:17.397 } 00:26:17.397 ] 00:26:17.397 } 00:26:17.397 ] 00:26:17.397 } 00:26:17.655 [2024-04-17 08:22:50.801495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.655 [2024-04-17 08:22:50.904708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.482  Copying: 0/0 [B] (average 0 Bps) 00:26:18.482 00:26:18.482 08:22:51 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:26:18.482 08:22:51 -- common/autotest_common.sh@640 -- # local es=0 00:26:18.482 08:22:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:26:18.482 08:22:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:18.482 08:22:51 -- dd/uring.sh@94 -- # : 00:26:18.482 08:22:51 -- dd/uring.sh@94 -- # gen_conf 00:26:18.482 08:22:51 -- dd/common.sh@31 -- # xtrace_disable 00:26:18.482 08:22:51 -- common/autotest_common.sh@10 -- # set +x 00:26:18.482 08:22:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:18.482 08:22:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:18.482 08:22:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:18.482 08:22:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:18.482 08:22:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:18.482 08:22:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:18.482 08:22:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:18.482 08:22:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:26:18.482 [2024-04-17 08:22:51.568244] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:18.482 [2024-04-17 08:22:51.568457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59517 ] 00:26:18.482 { 00:26:18.482 "subsystems": [ 00:26:18.482 { 00:26:18.482 "subsystem": "bdev", 00:26:18.482 "config": [ 00:26:18.482 { 00:26:18.482 "params": { 00:26:18.482 "block_size": 512, 00:26:18.482 "num_blocks": 1048576, 00:26:18.482 "name": "malloc0" 00:26:18.482 }, 00:26:18.482 "method": "bdev_malloc_create" 00:26:18.482 }, 00:26:18.482 { 00:26:18.482 "params": { 00:26:18.482 "filename": "/dev/zram1", 00:26:18.482 "name": "uring0" 00:26:18.482 }, 00:26:18.482 "method": "bdev_uring_create" 00:26:18.482 }, 00:26:18.482 { 00:26:18.482 "params": { 00:26:18.482 "name": "uring0" 00:26:18.482 }, 00:26:18.482 "method": "bdev_uring_delete" 00:26:18.482 }, 00:26:18.482 { 00:26:18.482 "method": "bdev_wait_for_examine" 00:26:18.482 } 00:26:18.482 ] 00:26:18.482 } 00:26:18.482 ] 00:26:18.482 } 00:26:18.740 [2024-04-17 08:22:51.833057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.740 [2024-04-17 08:22:51.925452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.998 [2024-04-17 08:22:52.129880] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:26:18.998 [2024-04-17 08:22:52.129935] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:26:18.998 [2024-04-17 08:22:52.129944] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:26:18.998 [2024-04-17 08:22:52.129951] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:19.256 [2024-04-17 08:22:52.378423] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:19.256 08:22:52 -- common/autotest_common.sh@643 -- # es=237 00:26:19.256 08:22:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:19.256 08:22:52 -- common/autotest_common.sh@652 -- # es=109 00:26:19.256 08:22:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:19.256 08:22:52 -- common/autotest_common.sh@660 -- # es=1 00:26:19.256 08:22:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:19.256 08:22:52 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:26:19.256 08:22:52 -- dd/common.sh@172 -- # local id=1 00:26:19.256 08:22:52 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:26:19.256 08:22:52 -- dd/common.sh@176 -- # echo 1 00:26:19.256 08:22:52 -- dd/common.sh@177 -- # echo 1 00:26:19.256 08:22:52 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:26:19.514 00:26:19.514 real 0m13.744s 00:26:19.514 ************************************ 00:26:19.514 END TEST dd_uring_copy 00:26:19.514 ************************************ 00:26:19.514 user 0m8.084s 00:26:19.514 sys 0m4.882s 00:26:19.514 08:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.514 08:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:19.514 ************************************ 00:26:19.514 END TEST spdk_dd_uring 00:26:19.514 ************************************ 00:26:19.514 00:26:19.514 real 0m13.913s 00:26:19.514 user 0m8.151s 00:26:19.514 sys 0m4.992s 00:26:19.514 08:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.514 08:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:19.514 08:22:52 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:19.514 08:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:19.514 08:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.514 08:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:19.514 ************************************ 00:26:19.514 START TEST spdk_dd_sparse 00:26:19.514 ************************************ 00:26:19.514 08:22:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:19.773 * Looking for test storage... 00:26:19.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:19.773 08:22:52 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:19.773 08:22:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.773 08:22:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.773 08:22:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.773 08:22:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.774 08:22:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.774 08:22:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.774 08:22:52 -- paths/export.sh@5 -- # export PATH 00:26:19.774 08:22:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.774 08:22:52 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:26:19.774 08:22:52 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:26:19.774 08:22:52 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:26:19.774 08:22:52 -- dd/sparse.sh@111 -- # file2=file_zero2 00:26:19.774 08:22:52 -- dd/sparse.sh@112 -- # file3=file_zero3 00:26:19.774 08:22:52 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:26:19.774 08:22:52 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:26:19.774 08:22:52 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:26:19.774 08:22:52 -- dd/sparse.sh@118 -- # prepare 00:26:19.774 08:22:52 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:26:19.774 08:22:52 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:26:19.774 1+0 records in 00:26:19.774 1+0 records out 00:26:19.774 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00414867 s, 1.0 GB/s 00:26:19.774 08:22:52 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:26:19.774 1+0 records in 00:26:19.774 1+0 records out 00:26:19.774 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00400761 s, 1.0 GB/s 00:26:19.774 08:22:52 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:26:19.774 1+0 records in 00:26:19.774 1+0 records out 00:26:19.774 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00606114 s, 692 MB/s 00:26:19.774 08:22:52 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:26:19.774 08:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:19.774 08:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.774 08:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:19.774 ************************************ 00:26:19.774 START TEST dd_sparse_file_to_file 00:26:19.774 ************************************ 00:26:19.774 08:22:52 -- common/autotest_common.sh@1104 -- # file_to_file 00:26:19.774 08:22:52 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:26:19.774 08:22:52 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:26:19.774 08:22:52 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:26:19.774 08:22:52 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:26:19.774 08:22:52 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:26:19.774 08:22:52 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:26:19.774 08:22:52 -- dd/sparse.sh@41 -- # gen_conf 00:26:19.774 08:22:52 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:26:19.774 08:22:52 -- dd/common.sh@31 -- # xtrace_disable 00:26:19.774 08:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:19.774 [2024-04-17 08:22:52.953793] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:19.774 [2024-04-17 08:22:52.953948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59609 ] 00:26:19.774 { 00:26:19.774 "subsystems": [ 00:26:19.774 { 00:26:19.774 "subsystem": "bdev", 00:26:19.774 "config": [ 00:26:19.774 { 00:26:19.774 "params": { 00:26:19.774 "block_size": 4096, 00:26:19.774 "filename": "dd_sparse_aio_disk", 00:26:19.774 "name": "dd_aio" 00:26:19.774 }, 00:26:19.774 "method": "bdev_aio_create" 00:26:19.774 }, 00:26:19.774 { 00:26:19.774 "params": { 00:26:19.774 "lvs_name": "dd_lvstore", 00:26:19.774 "bdev_name": "dd_aio" 00:26:19.774 }, 00:26:19.774 "method": "bdev_lvol_create_lvstore" 00:26:19.774 }, 00:26:19.774 { 00:26:19.774 "method": "bdev_wait_for_examine" 00:26:19.774 } 00:26:19.774 ] 00:26:19.774 } 00:26:19.774 ] 00:26:19.774 } 00:26:19.774 [2024-04-17 08:22:53.092152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.032 [2024-04-17 08:22:53.190674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.290  Copying: 12/36 [MB] (average 1333 MBps) 00:26:20.290 00:26:20.290 08:22:53 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:26:20.290 08:22:53 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:26:20.290 08:22:53 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:26:20.290 08:22:53 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:26:20.290 08:22:53 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:26:20.290 08:22:53 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:26:20.290 08:22:53 -- dd/sparse.sh@52 -- # stat1_b=24576 00:26:20.290 08:22:53 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:26:20.290 ************************************ 00:26:20.290 END TEST dd_sparse_file_to_file 00:26:20.290 ************************************ 00:26:20.290 08:22:53 -- dd/sparse.sh@53 -- # stat2_b=24576 00:26:20.290 08:22:53 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:26:20.290 00:26:20.290 real 0m0.679s 00:26:20.290 user 0m0.424s 00:26:20.290 sys 0m0.158s 00:26:20.290 08:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.290 08:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:20.548 08:22:53 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:26:20.548 08:22:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:20.548 08:22:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:20.548 08:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:20.548 ************************************ 00:26:20.548 START TEST dd_sparse_file_to_bdev 00:26:20.548 ************************************ 00:26:20.548 08:22:53 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:26:20.548 08:22:53 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:26:20.548 08:22:53 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:26:20.548 08:22:53 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:26:20.548 08:22:53 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:26:20.548 08:22:53 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:26:20.548 08:22:53 -- dd/sparse.sh@73 -- # gen_conf 
00:26:20.548 08:22:53 -- dd/common.sh@31 -- # xtrace_disable 00:26:20.548 08:22:53 -- common/autotest_common.sh@10 -- # set +x 00:26:20.548 [2024-04-17 08:22:53.689447] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:20.548 [2024-04-17 08:22:53.689521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59651 ] 00:26:20.548 { 00:26:20.548 "subsystems": [ 00:26:20.548 { 00:26:20.548 "subsystem": "bdev", 00:26:20.548 "config": [ 00:26:20.548 { 00:26:20.548 "params": { 00:26:20.548 "block_size": 4096, 00:26:20.548 "filename": "dd_sparse_aio_disk", 00:26:20.548 "name": "dd_aio" 00:26:20.548 }, 00:26:20.548 "method": "bdev_aio_create" 00:26:20.548 }, 00:26:20.548 { 00:26:20.548 "params": { 00:26:20.548 "lvs_name": "dd_lvstore", 00:26:20.548 "lvol_name": "dd_lvol", 00:26:20.548 "size": 37748736, 00:26:20.548 "thin_provision": true 00:26:20.548 }, 00:26:20.548 "method": "bdev_lvol_create" 00:26:20.548 }, 00:26:20.548 { 00:26:20.548 "method": "bdev_wait_for_examine" 00:26:20.548 } 00:26:20.548 ] 00:26:20.548 } 00:26:20.548 ] 00:26:20.548 } 00:26:20.548 [2024-04-17 08:22:53.826809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.806 [2024-04-17 08:22:53.930497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.806 [2024-04-17 08:22:54.017848] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:26:20.806  Copying: 12/36 [MB] (average 521 MBps)[2024-04-17 08:22:54.058509] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:26:21.063 00:26:21.063 00:26:21.063 00:26:21.063 real 0m0.658s 00:26:21.063 user 0m0.443s 00:26:21.063 sys 0m0.147s 00:26:21.063 ************************************ 00:26:21.063 END TEST dd_sparse_file_to_bdev 00:26:21.063 ************************************ 00:26:21.063 08:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.063 08:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.063 08:22:54 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:26:21.063 08:22:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.063 08:22:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.063 08:22:54 -- common/autotest_common.sh@10 -- # set +x 00:26:21.063 ************************************ 00:26:21.063 START TEST dd_sparse_bdev_to_file 00:26:21.064 ************************************ 00:26:21.064 08:22:54 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:26:21.064 08:22:54 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:26:21.064 08:22:54 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:26:21.064 08:22:54 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:26:21.064 08:22:54 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:26:21.064 08:22:54 -- dd/sparse.sh@91 -- # gen_conf 00:26:21.064 08:22:54 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:26:21.064 08:22:54 -- dd/common.sh@31 -- # xtrace_disable 00:26:21.064 08:22:54 -- 
common/autotest_common.sh@10 -- # set +x 00:26:21.321 { 00:26:21.321 "subsystems": [ 00:26:21.321 { 00:26:21.321 "subsystem": "bdev", 00:26:21.321 "config": [ 00:26:21.321 { 00:26:21.321 "params": { 00:26:21.321 "block_size": 4096, 00:26:21.321 "filename": "dd_sparse_aio_disk", 00:26:21.321 "name": "dd_aio" 00:26:21.321 }, 00:26:21.321 "method": "bdev_aio_create" 00:26:21.321 }, 00:26:21.321 { 00:26:21.321 "method": "bdev_wait_for_examine" 00:26:21.321 } 00:26:21.321 ] 00:26:21.321 } 00:26:21.321 ] 00:26:21.321 } 00:26:21.321 [2024-04-17 08:22:54.424149] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:21.321 [2024-04-17 08:22:54.424221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59688 ] 00:26:21.321 [2024-04-17 08:22:54.553180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.579 [2024-04-17 08:22:54.655062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.837  Copying: 12/36 [MB] (average 1000 MBps) 00:26:21.837 00:26:21.837 08:22:55 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:26:21.837 08:22:55 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:26:21.837 08:22:55 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:26:21.837 08:22:55 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:26:21.837 08:22:55 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:26:21.837 08:22:55 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:26:21.837 08:22:55 -- dd/sparse.sh@102 -- # stat2_b=24576 00:26:21.837 08:22:55 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:26:21.837 08:22:55 -- dd/sparse.sh@103 -- # stat3_b=24576 00:26:21.837 08:22:55 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:26:21.837 00:26:21.837 real 0m0.663s 00:26:21.837 user 0m0.406s 00:26:21.837 sys 0m0.174s 00:26:21.837 ************************************ 00:26:21.837 END TEST dd_sparse_bdev_to_file 00:26:21.837 ************************************ 00:26:21.837 08:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.837 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:21.837 08:22:55 -- dd/sparse.sh@1 -- # cleanup 00:26:21.837 08:22:55 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:26:21.837 08:22:55 -- dd/sparse.sh@12 -- # rm file_zero1 00:26:21.837 08:22:55 -- dd/sparse.sh@13 -- # rm file_zero2 00:26:21.837 08:22:55 -- dd/sparse.sh@14 -- # rm file_zero3 00:26:21.837 ************************************ 00:26:21.837 END TEST spdk_dd_sparse 00:26:21.837 ************************************ 00:26:21.837 00:26:21.837 real 0m2.328s 00:26:21.837 user 0m1.375s 00:26:21.837 sys 0m0.711s 00:26:21.837 08:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.837 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:21.837 08:22:55 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:26:21.837 08:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.837 08:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.837 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:21.837 ************************************ 00:26:21.837 START TEST spdk_dd_negative 00:26:21.837 ************************************ 00:26:21.837 08:22:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
00:26:22.097 * Looking for test storage... 00:26:22.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:22.097 08:22:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:22.097 08:22:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.097 08:22:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.097 08:22:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.097 08:22:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.097 08:22:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.097 08:22:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.097 08:22:55 -- paths/export.sh@5 -- # export PATH 00:26:22.097 08:22:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.097 08:22:55 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:22.097 08:22:55 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:22.097 08:22:55 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:22.097 08:22:55 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:22.097 08:22:55 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:26:22.097 08:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.097 08:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.097 08:22:55 -- 
common/autotest_common.sh@10 -- # set +x 00:26:22.097 ************************************ 00:26:22.097 START TEST dd_invalid_arguments 00:26:22.097 ************************************ 00:26:22.097 08:22:55 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:26:22.097 08:22:55 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:26:22.097 08:22:55 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.097 08:22:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:26:22.097 08:22:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.097 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.097 08:22:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.097 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.097 08:22:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.097 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.097 08:22:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.097 08:22:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:22.097 08:22:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:26:22.097 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:26:22.097 options: 00:26:22.097 -c, --config JSON config file (default none) 00:26:22.097 --json JSON config file (default none) 00:26:22.097 --json-ignore-init-errors 00:26:22.097 don't exit on invalid config entry 00:26:22.097 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:26:22.097 -g, --single-file-segments 00:26:22.097 force creating just one hugetlbfs file 00:26:22.097 -h, --help show this usage 00:26:22.097 -i, --shm-id shared memory ID (optional) 00:26:22.098 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:26:22.098 --lcores lcore to CPU mapping list. The list is in the format: 00:26:22.098 [<,lcores[@CPUs]>...] 00:26:22.098 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:26:22.098 Within the group, '-' is used for range separator, 00:26:22.098 ',' is used for single number separator. 00:26:22.098 '( )' can be omitted for single element group, 00:26:22.098 '@' can be omitted if cpus and lcores have the same value 00:26:22.098 -n, --mem-channels channel number of memory channels used for DPDK 00:26:22.098 -p, --main-core main (primary) core for DPDK 00:26:22.098 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:26:22.098 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:26:22.098 --disable-cpumask-locks Disable CPU core lock files. 
00:26:22.098 --silence-noticelog disable notice level logging to stderr 00:26:22.098 --msg-mempool-size global message memory pool size in count (default: 262143) 00:26:22.098 -u, --no-pci disable PCI access 00:26:22.098 --wait-for-rpc wait for RPCs to initialize subsystems 00:26:22.098 --max-delay maximum reactor delay (in microseconds) 00:26:22.098 -B, --pci-blocked pci addr to block (can be used more than once) 00:26:22.098 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:26:22.098 -R, --huge-unlink unlink huge files after initialization 00:26:22.098 -v, --version print SPDK version 00:26:22.098 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:26:22.098 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:26:22.098 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:26:22.098 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:26:22.098 Tracepoints vary in size and can use more than one trace entry. 00:26:22.098 --rpcs-allowed comma-separated list of permitted RPCS 00:26:22.098 --env-context Opaque context for use of the env implementation 00:26:22.098 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:26:22.098 --no-huge run without using hugepages 00:26:22.098 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:26:22.098 -e, --tpoint-group [:] 00:26:22.098 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:26:22.098 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:26:22.098 [2024-04-17 08:22:55.328298] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:26:22.098 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:26:22.098 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:26:22.098 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:26:22.098 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:26:22.098 [--------- DD Options ---------] 00:26:22.098 --if Input file. Must specify either --if or --ib. 00:26:22.098 --ib Input bdev. Must specifier either --if or --ib 00:26:22.098 --of Output file. Must specify either --of or --ob. 00:26:22.098 --ob Output bdev. Must specify either --of or --ob. 00:26:22.098 --iflag Input file flags. 00:26:22.098 --oflag Output file flags. 00:26:22.098 --bs I/O unit size (default: 4096) 00:26:22.098 --qd Queue depth (default: 2) 00:26:22.098 --count I/O unit count. 
The number of I/O units to copy. (default: all) 00:26:22.098 --skip Skip this many I/O units at start of input. (default: 0) 00:26:22.098 --seek Skip this many I/O units at start of output. (default: 0) 00:26:22.098 --aio Force usage of AIO. (by default io_uring is used if available) 00:26:22.098 --sparse Enable hole skipping in input target 00:26:22.098 Available iflag and oflag values: 00:26:22.098 append - append mode 00:26:22.098 direct - use direct I/O for data 00:26:22.098 directory - fail unless a directory 00:26:22.098 dsync - use synchronized I/O for data 00:26:22.098 noatime - do not update access time 00:26:22.098 noctty - do not assign controlling terminal from file 00:26:22.098 nofollow - do not follow symlinks 00:26:22.098 nonblock - use non-blocking I/O 00:26:22.098 sync - use synchronized I/O for data and metadata 00:26:22.098 08:22:55 -- common/autotest_common.sh@643 -- # es=2 00:26:22.098 08:22:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:22.098 08:22:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:22.098 08:22:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:22.098 00:26:22.098 real 0m0.058s 00:26:22.098 user 0m0.034s 00:26:22.098 sys 0m0.022s 00:26:22.098 08:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.098 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.098 ************************************ 00:26:22.098 END TEST dd_invalid_arguments 00:26:22.098 ************************************ 00:26:22.098 08:22:55 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:26:22.098 08:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.098 08:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.098 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.098 ************************************ 00:26:22.098 START TEST dd_double_input 00:26:22.098 ************************************ 00:26:22.098 08:22:55 -- common/autotest_common.sh@1104 -- # double_input 00:26:22.098 08:22:55 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:26:22.098 08:22:55 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.098 08:22:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:26:22.098 08:22:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.098 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.098 08:22:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.098 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.098 08:22:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.098 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.098 08:22:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.098 08:22:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:22.098 08:22:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:26:22.356 [2024-04-17 08:22:55.456723] spdk_dd.c:1467:main: *ERROR*: You may specify either 
--if or --ib, but not both. 00:26:22.356 08:22:55 -- common/autotest_common.sh@643 -- # es=22 00:26:22.356 08:22:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:22.356 08:22:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:22.356 08:22:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:22.356 00:26:22.356 real 0m0.069s 00:26:22.356 user 0m0.045s 00:26:22.356 sys 0m0.022s 00:26:22.356 ************************************ 00:26:22.356 END TEST dd_double_input 00:26:22.356 ************************************ 00:26:22.356 08:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.356 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.356 08:22:55 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:26:22.356 08:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.356 08:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.356 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.356 ************************************ 00:26:22.356 START TEST dd_double_output 00:26:22.356 ************************************ 00:26:22.356 08:22:55 -- common/autotest_common.sh@1104 -- # double_output 00:26:22.356 08:22:55 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:26:22.356 08:22:55 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.356 08:22:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:26:22.356 08:22:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.356 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.356 08:22:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.356 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.356 08:22:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.356 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.356 08:22:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.356 08:22:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:22.356 08:22:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:26:22.356 [2024-04-17 08:22:55.566927] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:26:22.356 08:22:55 -- common/autotest_common.sh@643 -- # es=22 00:26:22.356 08:22:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:22.356 ************************************ 00:26:22.356 END TEST dd_double_output 00:26:22.356 ************************************ 00:26:22.356 08:22:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:22.356 08:22:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:22.356 00:26:22.356 real 0m0.056s 00:26:22.356 user 0m0.038s 00:26:22.356 sys 0m0.017s 00:26:22.356 08:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.356 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.356 08:22:55 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:26:22.356 08:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.356 08:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.356 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.356 ************************************ 00:26:22.356 START TEST dd_no_input 00:26:22.356 ************************************ 00:26:22.356 08:22:55 -- common/autotest_common.sh@1104 -- # no_input 00:26:22.356 08:22:55 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:26:22.356 08:22:55 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.356 08:22:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:26:22.356 08:22:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.356 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.356 08:22:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.356 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.356 08:22:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.356 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.356 08:22:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.356 08:22:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:22.357 08:22:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:26:22.615 [2024-04-17 08:22:55.699139] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:26:22.615 08:22:55 -- common/autotest_common.sh@643 -- # es=22 00:26:22.615 08:22:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:22.615 08:22:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:22.615 08:22:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:22.615 ************************************ 00:26:22.615 END TEST dd_no_input 00:26:22.615 ************************************ 00:26:22.615 00:26:22.615 real 0m0.075s 00:26:22.615 user 0m0.043s 00:26:22.615 sys 0m0.030s 00:26:22.615 08:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.615 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.615 08:22:55 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:26:22.615 08:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.615 08:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.615 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.615 ************************************ 
00:26:22.615 START TEST dd_no_output 00:26:22.615 ************************************ 00:26:22.615 08:22:55 -- common/autotest_common.sh@1104 -- # no_output 00:26:22.615 08:22:55 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:22.615 08:22:55 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.615 08:22:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:22.615 08:22:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.615 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.615 08:22:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.615 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.615 08:22:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.615 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.615 08:22:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.615 08:22:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:22.615 08:22:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:22.615 [2024-04-17 08:22:55.810705] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:26:22.615 ************************************ 00:26:22.615 END TEST dd_no_output 00:26:22.615 ************************************ 00:26:22.615 08:22:55 -- common/autotest_common.sh@643 -- # es=22 00:26:22.615 08:22:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:22.615 08:22:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:22.615 08:22:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:22.615 00:26:22.615 real 0m0.069s 00:26:22.615 user 0m0.034s 00:26:22.615 sys 0m0.034s 00:26:22.615 08:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.615 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.615 08:22:55 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:26:22.615 08:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.615 08:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.615 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.615 ************************************ 00:26:22.615 START TEST dd_wrong_blocksize 00:26:22.615 ************************************ 00:26:22.615 08:22:55 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:26:22.615 08:22:55 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:26:22.615 08:22:55 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.615 08:22:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:26:22.615 08:22:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.615 08:22:55 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:26:22.615 08:22:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.616 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.616 08:22:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.616 08:22:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.616 08:22:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.616 08:22:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:22.616 08:22:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:26:22.616 [2024-04-17 08:22:55.929421] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:26:22.616 08:22:55 -- common/autotest_common.sh@643 -- # es=22 00:26:22.616 08:22:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:22.616 08:22:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:22.616 08:22:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:22.616 00:26:22.616 real 0m0.072s 00:26:22.616 user 0m0.041s 00:26:22.616 sys 0m0.030s 00:26:22.616 ************************************ 00:26:22.616 END TEST dd_wrong_blocksize 00:26:22.616 ************************************ 00:26:22.616 08:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.616 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.875 08:22:55 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:26:22.875 08:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.875 08:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.875 08:22:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.875 ************************************ 00:26:22.875 START TEST dd_smaller_blocksize 00:26:22.875 ************************************ 00:26:22.875 08:22:56 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:26:22.875 08:22:56 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:26:22.875 08:22:56 -- common/autotest_common.sh@640 -- # local es=0 00:26:22.875 08:22:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:26:22.875 08:22:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.875 08:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.875 08:22:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.875 08:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.875 08:22:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.875 08:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.875 08:22:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.875 08:22:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:26:22.875 08:22:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:26:22.875 [2024-04-17 08:22:56.063730] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:22.875 [2024-04-17 08:22:56.063794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59898 ] 00:26:23.133 [2024-04-17 08:22:56.206420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.133 [2024-04-17 08:22:56.302891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.391 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:26:23.391 [2024-04-17 08:22:56.612421] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:26:23.391 [2024-04-17 08:22:56.612521] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:23.391 [2024-04-17 08:22:56.708750] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:23.649 08:22:56 -- common/autotest_common.sh@643 -- # es=244 00:26:23.649 08:22:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:23.649 08:22:56 -- common/autotest_common.sh@652 -- # es=116 00:26:23.649 ************************************ 00:26:23.649 END TEST dd_smaller_blocksize 00:26:23.649 ************************************ 00:26:23.649 08:22:56 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:23.649 08:22:56 -- common/autotest_common.sh@660 -- # es=1 00:26:23.649 08:22:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:23.649 00:26:23.649 real 0m0.821s 00:26:23.649 user 0m0.387s 00:26:23.649 sys 0m0.327s 00:26:23.649 08:22:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.649 08:22:56 -- common/autotest_common.sh@10 -- # set +x 00:26:23.649 08:22:56 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:26:23.649 08:22:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:23.650 08:22:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:23.650 08:22:56 -- common/autotest_common.sh@10 -- # set +x 00:26:23.650 ************************************ 00:26:23.650 START TEST dd_invalid_count 00:26:23.650 ************************************ 00:26:23.650 08:22:56 -- common/autotest_common.sh@1104 -- # invalid_count 00:26:23.650 08:22:56 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:26:23.650 08:22:56 -- common/autotest_common.sh@640 -- # local es=0 00:26:23.650 08:22:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:26:23.650 08:22:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.650 08:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.650 08:22:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.650 08:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.650 08:22:56 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.650 08:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.650 08:22:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.650 08:22:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:23.650 08:22:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:26:23.650 [2024-04-17 08:22:56.922345] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:26:23.650 08:22:56 -- common/autotest_common.sh@643 -- # es=22 00:26:23.650 ************************************ 00:26:23.650 END TEST dd_invalid_count 00:26:23.650 ************************************ 00:26:23.650 08:22:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:23.650 08:22:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:23.650 08:22:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:23.650 00:26:23.650 real 0m0.053s 00:26:23.650 user 0m0.032s 00:26:23.650 sys 0m0.019s 00:26:23.650 08:22:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.650 08:22:56 -- common/autotest_common.sh@10 -- # set +x 00:26:23.909 08:22:56 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:26:23.909 08:22:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:23.909 08:22:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:23.909 08:22:56 -- common/autotest_common.sh@10 -- # set +x 00:26:23.909 ************************************ 00:26:23.909 START TEST dd_invalid_oflag 00:26:23.909 ************************************ 00:26:23.909 08:22:56 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:26:23.909 08:22:56 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:26:23.909 08:22:56 -- common/autotest_common.sh@640 -- # local es=0 00:26:23.909 08:22:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:26:23.909 08:22:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.909 08:22:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.909 08:22:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.909 08:22:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:23.909 08:22:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:26:23.909 [2024-04-17 08:22:57.056664] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:26:23.909 08:22:57 -- common/autotest_common.sh@643 -- # es=22 00:26:23.909 08:22:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:23.909 08:22:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:23.909 
************************************ 00:26:23.909 END TEST dd_invalid_oflag 00:26:23.909 ************************************ 00:26:23.909 08:22:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:23.909 00:26:23.909 real 0m0.071s 00:26:23.909 user 0m0.040s 00:26:23.909 sys 0m0.030s 00:26:23.909 08:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.909 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.909 08:22:57 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:26:23.909 08:22:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:23.909 08:22:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:23.909 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.909 ************************************ 00:26:23.909 START TEST dd_invalid_iflag 00:26:23.909 ************************************ 00:26:23.909 08:22:57 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:26:23.909 08:22:57 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:26:23.909 08:22:57 -- common/autotest_common.sh@640 -- # local es=0 00:26:23.909 08:22:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:26:23.909 08:22:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.909 08:22:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.909 08:22:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.909 08:22:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:23.909 08:22:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:26:23.909 [2024-04-17 08:22:57.166536] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:26:23.909 08:22:57 -- common/autotest_common.sh@643 -- # es=22 00:26:23.909 08:22:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:23.909 08:22:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:23.909 08:22:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:23.909 ************************************ 00:26:23.909 END TEST dd_invalid_iflag 00:26:23.909 ************************************ 00:26:23.909 00:26:23.909 real 0m0.068s 00:26:23.909 user 0m0.034s 00:26:23.909 sys 0m0.033s 00:26:23.909 08:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.909 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.909 08:22:57 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:26:23.909 08:22:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:23.909 08:22:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:23.909 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.909 ************************************ 00:26:23.909 START TEST dd_unknown_flag 00:26:23.909 ************************************ 00:26:23.909 08:22:57 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:26:23.909 08:22:57 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:26:23.909 08:22:57 -- common/autotest_common.sh@640 -- # local es=0 00:26:23.909 08:22:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:26:23.909 08:22:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.909 08:22:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.909 08:22:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.909 08:22:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.909 08:22:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:23.909 08:22:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:26:24.169 [2024-04-17 08:22:57.269938] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:24.169 [2024-04-17 08:22:57.270007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:26:24.169 [2024-04-17 08:22:57.409412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.430 [2024-04-17 08:22:57.513589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.430 [2024-04-17 08:22:57.583940] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:26:24.430 [2024-04-17 08:22:57.583996] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:26:24.430 [2024-04-17 08:22:57.584004] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:26:24.430 [2024-04-17 08:22:57.584012] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:24.430 [2024-04-17 08:22:57.678384] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:24.689 08:22:57 -- common/autotest_common.sh@643 -- # es=236 00:26:24.689 08:22:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:24.689 08:22:57 -- common/autotest_common.sh@652 -- # es=108 00:26:24.689 08:22:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:24.689 08:22:57 -- common/autotest_common.sh@660 -- # es=1 00:26:24.689 08:22:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:24.689 00:26:24.689 real 0m0.565s 00:26:24.689 user 0m0.346s 00:26:24.689 sys 0m0.112s 00:26:24.689 ************************************ 00:26:24.689 END TEST dd_unknown_flag 00:26:24.689 ************************************ 00:26:24.689 08:22:57 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:26:24.689 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.689 08:22:57 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:26:24.689 08:22:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:24.689 08:22:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:24.689 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:26:24.689 ************************************ 00:26:24.689 START TEST dd_invalid_json 00:26:24.689 ************************************ 00:26:24.689 08:22:57 -- common/autotest_common.sh@1104 -- # invalid_json 00:26:24.689 08:22:57 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:26:24.689 08:22:57 -- common/autotest_common.sh@640 -- # local es=0 00:26:24.689 08:22:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:26:24.689 08:22:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.689 08:22:57 -- dd/negative_dd.sh@95 -- # : 00:26:24.690 08:22:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:24.690 08:22:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.690 08:22:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:24.690 08:22:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.690 08:22:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:24.690 08:22:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.690 08:22:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:24.690 08:22:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:26:24.690 [2024-04-17 08:22:57.894495] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:24.690 [2024-04-17 08:22:57.894562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60023 ] 00:26:24.949 [2024-04-17 08:22:58.031163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.949 [2024-04-17 08:22:58.136776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.949 [2024-04-17 08:22:58.136889] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:26:24.949 [2024-04-17 08:22:58.136903] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:24.949 [2024-04-17 08:22:58.136935] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:24.949 08:22:58 -- common/autotest_common.sh@643 -- # es=234 00:26:24.949 08:22:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:24.949 08:22:58 -- common/autotest_common.sh@652 -- # es=106 00:26:24.949 08:22:58 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:24.949 08:22:58 -- common/autotest_common.sh@660 -- # es=1 00:26:24.949 08:22:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:24.949 00:26:24.949 real 0m0.417s 00:26:24.949 user 0m0.248s 00:26:24.949 sys 0m0.066s 00:26:24.949 ************************************ 00:26:24.949 END TEST dd_invalid_json 00:26:24.949 ************************************ 00:26:24.949 08:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.949 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.208 ************************************ 00:26:25.208 END TEST spdk_dd_negative 00:26:25.208 ************************************ 00:26:25.208 00:26:25.208 real 0m3.134s 00:26:25.208 user 0m1.564s 00:26:25.208 sys 0m1.260s 00:26:25.208 08:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.208 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.208 ************************************ 00:26:25.208 END TEST spdk_dd 00:26:25.208 ************************************ 00:26:25.208 00:26:25.208 real 1m14.106s 00:26:25.208 user 0m47.083s 00:26:25.208 sys 0m18.001s 00:26:25.208 08:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.208 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.208 08:22:58 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:26:25.208 08:22:58 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:26:25.208 08:22:58 -- spdk/autotest.sh@268 -- # timing_exit lib 00:26:25.208 08:22:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:25.208 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.208 08:22:58 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:26:25.208 08:22:58 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:26:25.208 08:22:58 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:26:25.208 08:22:58 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:26:25.208 08:22:58 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:26:25.208 08:22:58 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:26:25.208 08:22:58 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:26:25.208 08:22:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:25.208 08:22:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.208 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.208 ************************************ 00:26:25.208 START 
TEST nvmf_tcp 00:26:25.208 ************************************ 00:26:25.208 08:22:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:26:25.208 * Looking for test storage... 00:26:25.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:26:25.468 08:22:58 -- nvmf/nvmf.sh@10 -- # uname -s 00:26:25.468 08:22:58 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:26:25.468 08:22:58 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:25.468 08:22:58 -- nvmf/common.sh@7 -- # uname -s 00:26:25.468 08:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.468 08:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.468 08:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.468 08:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.468 08:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.468 08:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.468 08:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.468 08:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.468 08:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.468 08:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.468 08:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:26:25.468 08:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:26:25.468 08:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.468 08:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.468 08:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:25.468 08:22:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:25.468 08:22:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.468 08:22:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.468 08:22:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.468 08:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.468 08:22:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.468 08:22:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.468 08:22:58 -- paths/export.sh@5 -- # export PATH 00:26:25.468 08:22:58 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.468 08:22:58 -- nvmf/common.sh@46 -- # : 0 00:26:25.468 08:22:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:25.468 08:22:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:25.468 08:22:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:25.468 08:22:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.468 08:22:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.468 08:22:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:25.468 08:22:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:25.468 08:22:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:25.468 08:22:58 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:25.468 08:22:58 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:26:25.468 08:22:58 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:26:25.468 08:22:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:25.468 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.468 08:22:58 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:26:25.468 08:22:58 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:26:25.468 08:22:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:25.468 08:22:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.468 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.468 ************************************ 00:26:25.468 START TEST nvmf_host_management 00:26:25.468 ************************************ 00:26:25.468 08:22:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:26:25.468 * Looking for test storage... 
00:26:25.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:25.468 08:22:58 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:25.468 08:22:58 -- nvmf/common.sh@7 -- # uname -s 00:26:25.468 08:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.468 08:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.468 08:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.468 08:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.468 08:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.468 08:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.468 08:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.468 08:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.468 08:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.468 08:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.468 08:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:26:25.468 08:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:26:25.468 08:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.468 08:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.468 08:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:25.468 08:22:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:25.468 08:22:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.468 08:22:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.468 08:22:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.468 08:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.468 08:22:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.469 08:22:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.469 08:22:58 -- 
paths/export.sh@5 -- # export PATH 00:26:25.469 08:22:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.469 08:22:58 -- nvmf/common.sh@46 -- # : 0 00:26:25.469 08:22:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:25.469 08:22:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:25.469 08:22:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:25.469 08:22:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.469 08:22:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.469 08:22:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:25.469 08:22:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:25.469 08:22:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:25.469 08:22:58 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:25.469 08:22:58 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.469 08:22:58 -- target/host_management.sh@104 -- # nvmftestinit 00:26:25.469 08:22:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:25.469 08:22:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.469 08:22:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:25.469 08:22:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:25.469 08:22:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:25.469 08:22:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.469 08:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.469 08:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.469 08:22:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:25.469 08:22:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:25.469 08:22:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:25.469 08:22:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:25.469 08:22:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:25.469 08:22:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:25.469 08:22:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.469 08:22:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.469 08:22:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:25.469 08:22:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:25.469 08:22:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:25.469 08:22:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:25.469 08:22:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:25.469 08:22:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.469 08:22:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:25.469 08:22:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:25.469 08:22:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:25.469 08:22:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:25.469 08:22:58 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:26:25.469 Cannot find device "nvmf_init_br" 00:26:25.469 08:22:58 -- nvmf/common.sh@153 -- # true 00:26:25.469 08:22:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:25.469 Cannot find device "nvmf_tgt_br" 00:26:25.469 08:22:58 -- nvmf/common.sh@154 -- # true 00:26:25.469 08:22:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:25.469 Cannot find device "nvmf_tgt_br2" 00:26:25.469 08:22:58 -- nvmf/common.sh@155 -- # true 00:26:25.469 08:22:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:25.727 Cannot find device "nvmf_init_br" 00:26:25.727 08:22:58 -- nvmf/common.sh@156 -- # true 00:26:25.727 08:22:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:25.727 Cannot find device "nvmf_tgt_br" 00:26:25.727 08:22:58 -- nvmf/common.sh@157 -- # true 00:26:25.727 08:22:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:25.727 Cannot find device "nvmf_tgt_br2" 00:26:25.727 08:22:58 -- nvmf/common.sh@158 -- # true 00:26:25.727 08:22:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:25.727 Cannot find device "nvmf_br" 00:26:25.727 08:22:58 -- nvmf/common.sh@159 -- # true 00:26:25.727 08:22:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:25.727 Cannot find device "nvmf_init_if" 00:26:25.727 08:22:58 -- nvmf/common.sh@160 -- # true 00:26:25.727 08:22:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:25.727 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:25.727 08:22:58 -- nvmf/common.sh@161 -- # true 00:26:25.727 08:22:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:25.727 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:25.727 08:22:58 -- nvmf/common.sh@162 -- # true 00:26:25.727 08:22:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:25.727 08:22:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:25.727 08:22:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:25.727 08:22:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:25.727 08:22:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:25.727 08:22:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:25.727 08:22:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:25.727 08:22:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:25.727 08:22:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:25.727 08:22:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:25.727 08:22:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:25.727 08:22:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:25.727 08:22:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:25.727 08:22:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:25.727 08:22:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:25.727 08:22:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:25.727 08:22:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:25.727 08:22:59 -- nvmf/common.sh@192 
-- # ip link set nvmf_br up 00:26:25.727 08:22:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:25.986 08:22:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:25.986 08:22:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:25.986 08:22:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:25.986 08:22:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:25.986 08:22:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:25.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:26:25.986 00:26:25.986 --- 10.0.0.2 ping statistics --- 00:26:25.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.986 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:25.986 08:22:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:25.986 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:25.986 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:26:25.986 00:26:25.986 --- 10.0.0.3 ping statistics --- 00:26:25.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.986 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:26:25.986 08:22:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:25.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:25.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:26:25.986 00:26:25.986 --- 10.0.0.1 ping statistics --- 00:26:25.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.986 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:26:25.986 08:22:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.986 08:22:59 -- nvmf/common.sh@421 -- # return 0 00:26:25.986 08:22:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:25.986 08:22:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.986 08:22:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:25.986 08:22:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:25.986 08:22:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.986 08:22:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:25.986 08:22:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:25.986 08:22:59 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:26:25.986 08:22:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:25.986 08:22:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.986 08:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.986 ************************************ 00:26:25.986 START TEST nvmf_host_management 00:26:25.986 ************************************ 00:26:25.986 08:22:59 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:26:25.986 08:22:59 -- target/host_management.sh@69 -- # starttarget 00:26:25.986 08:22:59 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:26:25.986 08:22:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:25.986 08:22:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:25.986 08:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.986 08:22:59 -- nvmf/common.sh@469 -- # nvmfpid=60282 00:26:25.986 08:22:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:25.986 08:22:59 -- 
nvmf/common.sh@470 -- # waitforlisten 60282 00:26:25.986 08:22:59 -- common/autotest_common.sh@819 -- # '[' -z 60282 ']' 00:26:25.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.986 08:22:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.986 08:22:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:25.986 08:22:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.986 08:22:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:25.986 08:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:25.986 [2024-04-17 08:22:59.251515] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:25.986 [2024-04-17 08:22:59.251592] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.245 [2024-04-17 08:22:59.394661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.245 [2024-04-17 08:22:59.500864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:26.245 [2024-04-17 08:22:59.501107] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.245 [2024-04-17 08:22:59.501150] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.245 [2024-04-17 08:22:59.501201] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.245 [2024-04-17 08:22:59.502196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.245 [2024-04-17 08:22:59.502301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.246 [2024-04-17 08:22:59.502397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.246 [2024-04-17 08:22:59.502401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:26.812 08:23:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:26.812 08:23:00 -- common/autotest_common.sh@852 -- # return 0 00:26:26.812 08:23:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:26.812 08:23:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:26.812 08:23:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.071 08:23:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.072 08:23:00 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:27.072 08:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.072 08:23:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.072 [2024-04-17 08:23:00.179235] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.072 08:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.072 08:23:00 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:26:27.072 08:23:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:27.072 08:23:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.072 08:23:00 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:26:27.072 08:23:00 -- target/host_management.sh@23 -- # cat 00:26:27.072 08:23:00 -- target/host_management.sh@30 -- # 
rpc_cmd 00:26:27.072 08:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.072 08:23:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.072 Malloc0 00:26:27.072 [2024-04-17 08:23:00.255641] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.072 08:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.072 08:23:00 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:26:27.072 08:23:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:27.072 08:23:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.072 08:23:00 -- target/host_management.sh@73 -- # perfpid=60339 00:26:27.072 08:23:00 -- target/host_management.sh@74 -- # waitforlisten 60339 /var/tmp/bdevperf.sock 00:26:27.072 08:23:00 -- common/autotest_common.sh@819 -- # '[' -z 60339 ']' 00:26:27.072 08:23:00 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:26:27.072 08:23:00 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:27.072 08:23:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:27.072 08:23:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:27.072 08:23:00 -- nvmf/common.sh@520 -- # config=() 00:26:27.072 08:23:00 -- nvmf/common.sh@520 -- # local subsystem config 00:26:27.072 08:23:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:27.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:27.072 08:23:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.072 08:23:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:27.072 08:23:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.072 { 00:26:27.072 "params": { 00:26:27.072 "name": "Nvme$subsystem", 00:26:27.072 "trtype": "$TEST_TRANSPORT", 00:26:27.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.072 "adrfam": "ipv4", 00:26:27.072 "trsvcid": "$NVMF_PORT", 00:26:27.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.072 "hdgst": ${hdgst:-false}, 00:26:27.072 "ddgst": ${ddgst:-false} 00:26:27.072 }, 00:26:27.072 "method": "bdev_nvme_attach_controller" 00:26:27.072 } 00:26:27.072 EOF 00:26:27.072 )") 00:26:27.072 08:23:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.072 08:23:00 -- nvmf/common.sh@542 -- # cat 00:26:27.072 08:23:00 -- nvmf/common.sh@544 -- # jq . 00:26:27.072 08:23:00 -- nvmf/common.sh@545 -- # IFS=, 00:26:27.072 08:23:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:27.072 "params": { 00:26:27.072 "name": "Nvme0", 00:26:27.072 "trtype": "tcp", 00:26:27.072 "traddr": "10.0.0.2", 00:26:27.072 "adrfam": "ipv4", 00:26:27.072 "trsvcid": "4420", 00:26:27.072 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.072 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:27.072 "hdgst": false, 00:26:27.072 "ddgst": false 00:26:27.072 }, 00:26:27.072 "method": "bdev_nvme_attach_controller" 00:26:27.072 }' 00:26:27.072 [2024-04-17 08:23:00.370763] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:27.072 [2024-04-17 08:23:00.370824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60339 ] 00:26:27.331 [2024-04-17 08:23:00.493330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.331 [2024-04-17 08:23:00.598704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.590 Running I/O for 10 seconds... 00:26:28.178 08:23:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:28.178 08:23:01 -- common/autotest_common.sh@852 -- # return 0 00:26:28.178 08:23:01 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:28.178 08:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.178 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.178 08:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.178 08:23:01 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:28.178 08:23:01 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:26:28.178 08:23:01 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:28.178 08:23:01 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:26:28.178 08:23:01 -- target/host_management.sh@52 -- # local ret=1 00:26:28.178 08:23:01 -- target/host_management.sh@53 -- # local i 00:26:28.178 08:23:01 -- target/host_management.sh@54 -- # (( i = 10 )) 00:26:28.178 08:23:01 -- target/host_management.sh@54 -- # (( i != 0 )) 00:26:28.178 08:23:01 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:26:28.178 08:23:01 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:26:28.178 08:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.178 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.178 08:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.178 08:23:01 -- target/host_management.sh@55 -- # read_io_count=1661 00:26:28.178 08:23:01 -- target/host_management.sh@58 -- # '[' 1661 -ge 100 ']' 00:26:28.178 08:23:01 -- target/host_management.sh@59 -- # ret=0 00:26:28.178 08:23:01 -- target/host_management.sh@60 -- # break 00:26:28.178 08:23:01 -- target/host_management.sh@64 -- # return 0 00:26:28.178 08:23:01 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:28.178 08:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.178 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.178 [2024-04-17 08:23:01.326755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.326885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.326937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.326987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.178 [2024-04-17 08:23:01.327506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.327974] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22024a0 is same with the state(5) to be set 00:26:28.179 [2024-04-17 08:23:01.328478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.179 [2024-04-17 08:23:01.328506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.179 [2024-04-17 08:23:01.328524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.179 [2024-04-17 08:23:01.328531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... dozens of near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: every outstanding READ/WRITE on qid:1 (various cids, lba 89472-99968, len:128) was reported as ABORTED - SQ DELETION (00/08) while the controller was being reset ...] 00:26:28.180 [2024-04-17 08:23:01.329455]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.180 [2024-04-17 08:23:01.329465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.180 [2024-04-17 08:23:01.329484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.180 [2024-04-17 08:23:01.329492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.180 [2024-04-17 08:23:01.329499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.180 [2024-04-17 08:23:01.329571] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10fbba0 was disconnected and freed. reset controller. 00:26:28.180 [2024-04-17 08:23:01.330700] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:28.180 task offset: 95488 on job bdev=Nvme0n1 fails 00:26:28.180 00:26:28.180 Latency(us) 00:26:28.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.181 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:28.181 Job: Nvme0n1 ended in about 0.57 seconds with error 00:26:28.181 Verification LBA range: start 0x0 length 0x400 00:26:28.181 Nvme0n1 : 0.57 3038.74 189.92 111.64 0.00 20008.38 3834.86 26786.77 00:26:28.181 =================================================================================================================== 00:26:28.181 Total : 3038.74 189.92 111.64 0.00 20008.38 3834.86 26786.77 00:26:28.181 08:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.181 08:23:01 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:28.181 08:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.181 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.181 [2024-04-17 08:23:01.332976] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:28.181 [2024-04-17 08:23:01.333004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fb3d0 (9): Bad file descriptor 00:26:28.181 [2024-04-17 08:23:01.334068] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:26:28.181 [2024-04-17 08:23:01.334211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:28.181 [2024-04-17 08:23:01.334287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.181 [2024-04-17 08:23:01.334358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:26:28.181 [2024-04-17 08:23:01.334406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:26:28.181 [2024-04-17 08:23:01.334448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.181 [2024-04-17 08:23:01.334486] 
nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10fb3d0 00:26:28.181 [2024-04-17 08:23:01.334534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fb3d0 (9): Bad file descriptor 00:26:28.181 [2024-04-17 08:23:01.334585] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:28.181 [2024-04-17 08:23:01.334626] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:28.181 [2024-04-17 08:23:01.334665] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:28.181 [2024-04-17 08:23:01.334698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.181 08:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.181 08:23:01 -- target/host_management.sh@87 -- # sleep 1 00:26:29.116 08:23:02 -- target/host_management.sh@91 -- # kill -9 60339 00:26:29.117 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (60339) - No such process 00:26:29.117 08:23:02 -- target/host_management.sh@91 -- # true 00:26:29.117 08:23:02 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:26:29.117 08:23:02 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:29.117 08:23:02 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:26:29.117 08:23:02 -- nvmf/common.sh@520 -- # config=() 00:26:29.117 08:23:02 -- nvmf/common.sh@520 -- # local subsystem config 00:26:29.117 08:23:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:29.117 08:23:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:29.117 { 00:26:29.117 "params": { 00:26:29.117 "name": "Nvme$subsystem", 00:26:29.117 "trtype": "$TEST_TRANSPORT", 00:26:29.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.117 "adrfam": "ipv4", 00:26:29.117 "trsvcid": "$NVMF_PORT", 00:26:29.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.117 "hdgst": ${hdgst:-false}, 00:26:29.117 "ddgst": ${ddgst:-false} 00:26:29.117 }, 00:26:29.117 "method": "bdev_nvme_attach_controller" 00:26:29.117 } 00:26:29.117 EOF 00:26:29.117 )") 00:26:29.117 08:23:02 -- nvmf/common.sh@542 -- # cat 00:26:29.117 08:23:02 -- nvmf/common.sh@544 -- # jq . 00:26:29.117 08:23:02 -- nvmf/common.sh@545 -- # IFS=, 00:26:29.117 08:23:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:29.117 "params": { 00:26:29.117 "name": "Nvme0", 00:26:29.117 "trtype": "tcp", 00:26:29.117 "traddr": "10.0.0.2", 00:26:29.117 "adrfam": "ipv4", 00:26:29.117 "trsvcid": "4420", 00:26:29.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:29.117 "hdgst": false, 00:26:29.117 "ddgst": false 00:26:29.117 }, 00:26:29.117 "method": "bdev_nvme_attach_controller" 00:26:29.117 }' 00:26:29.117 [2024-04-17 08:23:02.406477] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
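For reference, the bdevperf run above is configured entirely by the JSON that gen_nvmf_target_json prints and hands over through --json /dev/fd/62. A rough standalone sketch of the same attach is shown below, assuming bdevperf accepts the standard SPDK "subsystems"/"config" JSON layout (the wrapper around the printed params object is produced inside gen_nvmf_target_json and is not shown verbatim in the trace, and the /tmp path is made up for illustration):

cat > /tmp/nvme0_attach.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth, IO size, workload and runtime as the run in the log above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 1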
00:26:29.117 [2024-04-17 08:23:02.406690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60377 ] 00:26:29.375 [2024-04-17 08:23:02.547084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.375 [2024-04-17 08:23:02.664377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.634 Running I/O for 1 seconds... 00:26:30.568 00:26:30.568 Latency(us) 00:26:30.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.568 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:30.568 Verification LBA range: start 0x0 length 0x400 00:26:30.568 Nvme0n1 : 1.01 3228.51 201.78 0.00 0.00 19546.96 790.58 24726.25 00:26:30.568 =================================================================================================================== 00:26:30.568 Total : 3228.51 201.78 0.00 0.00 19546.96 790.58 24726.25 00:26:30.826 08:23:04 -- target/host_management.sh@101 -- # stoptarget 00:26:30.826 08:23:04 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:26:30.826 08:23:04 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:26:30.826 08:23:04 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:26:30.826 08:23:04 -- target/host_management.sh@40 -- # nvmftestfini 00:26:30.826 08:23:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:30.826 08:23:04 -- nvmf/common.sh@116 -- # sync 00:26:31.084 08:23:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:31.084 08:23:04 -- nvmf/common.sh@119 -- # set +e 00:26:31.084 08:23:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:31.084 08:23:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:31.084 rmmod nvme_tcp 00:26:31.084 rmmod nvme_fabrics 00:26:31.084 rmmod nvme_keyring 00:26:31.084 08:23:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:31.084 08:23:04 -- nvmf/common.sh@123 -- # set -e 00:26:31.084 08:23:04 -- nvmf/common.sh@124 -- # return 0 00:26:31.084 08:23:04 -- nvmf/common.sh@477 -- # '[' -n 60282 ']' 00:26:31.084 08:23:04 -- nvmf/common.sh@478 -- # killprocess 60282 00:26:31.084 08:23:04 -- common/autotest_common.sh@926 -- # '[' -z 60282 ']' 00:26:31.084 08:23:04 -- common/autotest_common.sh@930 -- # kill -0 60282 00:26:31.084 08:23:04 -- common/autotest_common.sh@931 -- # uname 00:26:31.084 08:23:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:31.084 08:23:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60282 00:26:31.084 killing process with pid 60282 00:26:31.084 08:23:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:31.084 08:23:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:31.084 08:23:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60282' 00:26:31.084 08:23:04 -- common/autotest_common.sh@945 -- # kill 60282 00:26:31.084 08:23:04 -- common/autotest_common.sh@950 -- # wait 60282 00:26:31.344 [2024-04-17 08:23:04.471587] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:26:31.344 08:23:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:31.344 08:23:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:31.344 08:23:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:31.344 08:23:04 
-- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.344 08:23:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:31.344 08:23:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.344 08:23:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.344 08:23:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.344 08:23:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:31.344 00:26:31.344 real 0m5.369s 00:26:31.344 user 0m22.421s 00:26:31.344 sys 0m1.152s 00:26:31.344 08:23:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.344 ************************************ 00:26:31.344 END TEST nvmf_host_management 00:26:31.344 ************************************ 00:26:31.344 08:23:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.344 08:23:04 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:26:31.344 ************************************ 00:26:31.344 END TEST nvmf_host_management 00:26:31.344 ************************************ 00:26:31.344 00:26:31.344 real 0m6.017s 00:26:31.344 user 0m22.590s 00:26:31.344 sys 0m1.452s 00:26:31.344 08:23:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.344 08:23:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.344 08:23:04 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:26:31.344 08:23:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:31.344 08:23:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:31.344 08:23:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.609 ************************************ 00:26:31.609 START TEST nvmf_lvol 00:26:31.609 ************************************ 00:26:31.609 08:23:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:26:31.609 * Looking for test storage... 
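The nvmf_lvol.sh suite that starts here drives all provisioning through rpc.py; the xtrace further down expands to roughly the condensed sequence sketched below ($rpc, $lvs and $lvol are shorthand for the full script path and the UUIDs that the create calls print in the trace, not literal variables from the suite):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                  # Malloc0
$rpc bdev_malloc_create 64 512                                  # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore on top of the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # LVOL_BDEV_INIT_SIZE=20
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420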
00:26:31.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:31.609 08:23:04 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:31.609 08:23:04 -- nvmf/common.sh@7 -- # uname -s 00:26:31.609 08:23:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.609 08:23:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.609 08:23:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.609 08:23:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.609 08:23:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.609 08:23:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.609 08:23:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.609 08:23:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.609 08:23:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.609 08:23:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.609 08:23:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:26:31.609 08:23:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:26:31.609 08:23:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.609 08:23:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.609 08:23:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:31.609 08:23:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:31.609 08:23:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.609 08:23:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.609 08:23:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.609 08:23:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.609 08:23:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.609 08:23:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.609 08:23:04 -- 
paths/export.sh@5 -- # export PATH 00:26:31.609 08:23:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.609 08:23:04 -- nvmf/common.sh@46 -- # : 0 00:26:31.609 08:23:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:31.609 08:23:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:31.609 08:23:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:31.609 08:23:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.609 08:23:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.609 08:23:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:31.609 08:23:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:31.610 08:23:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:31.610 08:23:04 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:31.610 08:23:04 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:31.610 08:23:04 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:26:31.610 08:23:04 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:26:31.610 08:23:04 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:31.610 08:23:04 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:26:31.610 08:23:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:31.610 08:23:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.610 08:23:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:31.610 08:23:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:31.610 08:23:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:31.610 08:23:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.610 08:23:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.610 08:23:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.610 08:23:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:31.610 08:23:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:31.610 08:23:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:31.610 08:23:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:31.610 08:23:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:31.610 08:23:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:31.610 08:23:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.610 08:23:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.610 08:23:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:31.610 08:23:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:31.610 08:23:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:31.610 08:23:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:31.610 08:23:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:31.610 08:23:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.610 08:23:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:31.610 08:23:04 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:31.610 08:23:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:31.610 08:23:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:31.610 08:23:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:31.610 08:23:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:31.610 Cannot find device "nvmf_tgt_br" 00:26:31.610 08:23:04 -- nvmf/common.sh@154 -- # true 00:26:31.610 08:23:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:31.610 Cannot find device "nvmf_tgt_br2" 00:26:31.610 08:23:04 -- nvmf/common.sh@155 -- # true 00:26:31.610 08:23:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:31.610 08:23:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:31.610 Cannot find device "nvmf_tgt_br" 00:26:31.610 08:23:04 -- nvmf/common.sh@157 -- # true 00:26:31.610 08:23:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:31.610 Cannot find device "nvmf_tgt_br2" 00:26:31.610 08:23:04 -- nvmf/common.sh@158 -- # true 00:26:31.610 08:23:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:31.868 08:23:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:31.868 08:23:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:31.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.868 08:23:04 -- nvmf/common.sh@161 -- # true 00:26:31.868 08:23:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:31.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.868 08:23:05 -- nvmf/common.sh@162 -- # true 00:26:31.868 08:23:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:31.868 08:23:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:31.868 08:23:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:31.868 08:23:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:31.868 08:23:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:31.868 08:23:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:31.868 08:23:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:31.868 08:23:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:31.868 08:23:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:31.868 08:23:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:31.868 08:23:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:31.868 08:23:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:31.868 08:23:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:31.868 08:23:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:31.868 08:23:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:31.868 08:23:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:31.868 08:23:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:31.868 08:23:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:31.868 08:23:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:31.868 08:23:05 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:31.868 08:23:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:31.868 08:23:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:31.868 08:23:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:31.868 08:23:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:31.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:26:31.868 00:26:31.868 --- 10.0.0.2 ping statistics --- 00:26:31.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.868 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:26:31.868 08:23:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:31.868 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:31.868 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:26:31.868 00:26:31.868 --- 10.0.0.3 ping statistics --- 00:26:31.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.868 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:31.868 08:23:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:31.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:31.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:26:31.868 00:26:31.868 --- 10.0.0.1 ping statistics --- 00:26:31.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.868 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:31.868 08:23:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.868 08:23:05 -- nvmf/common.sh@421 -- # return 0 00:26:31.868 08:23:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:31.868 08:23:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.868 08:23:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:31.868 08:23:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:31.868 08:23:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.868 08:23:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:31.868 08:23:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:31.868 08:23:05 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:26:31.868 08:23:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:31.868 08:23:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:31.868 08:23:05 -- common/autotest_common.sh@10 -- # set +x 00:26:31.868 08:23:05 -- nvmf/common.sh@469 -- # nvmfpid=60601 00:26:31.868 08:23:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:26:31.868 08:23:05 -- nvmf/common.sh@470 -- # waitforlisten 60601 00:26:31.868 08:23:05 -- common/autotest_common.sh@819 -- # '[' -z 60601 ']' 00:26:31.869 08:23:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.869 08:23:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:31.869 08:23:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
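The nvmf_veth_init block traced above is easier to read as a condensed sketch of the topology it builds (interface names, addresses and iptables rules exactly as in the trace; the initiator stays in the root namespace, the target runs inside nvmf_tgt_ns_spdk, and a bridge joins the veth peers):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# after which the three pings in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the netns) succeed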
00:26:31.869 08:23:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:31.869 08:23:05 -- common/autotest_common.sh@10 -- # set +x 00:26:32.127 [2024-04-17 08:23:05.240535] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:32.127 [2024-04-17 08:23:05.240603] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.127 [2024-04-17 08:23:05.377975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:32.385 [2024-04-17 08:23:05.481442] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:32.385 [2024-04-17 08:23:05.481697] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.385 [2024-04-17 08:23:05.481725] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.385 [2024-04-17 08:23:05.481771] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.385 [2024-04-17 08:23:05.481911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.385 [2024-04-17 08:23:05.482007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.385 [2024-04-17 08:23:05.482012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.952 08:23:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:32.952 08:23:06 -- common/autotest_common.sh@852 -- # return 0 00:26:32.952 08:23:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:32.952 08:23:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:32.952 08:23:06 -- common/autotest_common.sh@10 -- # set +x 00:26:32.952 08:23:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.952 08:23:06 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:33.209 [2024-04-17 08:23:06.356745] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.209 08:23:06 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:33.467 08:23:06 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:26:33.467 08:23:06 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:33.726 08:23:06 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:26:33.726 08:23:06 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:26:33.726 08:23:07 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:26:33.984 08:23:07 -- target/nvmf_lvol.sh@29 -- # lvs=ebeee7f5-ba27-4313-be23-c0beb0026211 00:26:33.984 08:23:07 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ebeee7f5-ba27-4313-be23-c0beb0026211 lvol 20 00:26:34.241 08:23:07 -- target/nvmf_lvol.sh@32 -- # lvol=e1ad929c-c706-4786-a407-9767f091750d 00:26:34.241 08:23:07 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:34.500 08:23:07 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 e1ad929c-c706-4786-a407-9767f091750d 00:26:34.758 08:23:07 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:35.024 [2024-04-17 08:23:08.116531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.024 08:23:08 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:35.024 08:23:08 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:26:35.024 08:23:08 -- target/nvmf_lvol.sh@42 -- # perf_pid=60685 00:26:35.024 08:23:08 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:26:36.428 08:23:09 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot e1ad929c-c706-4786-a407-9767f091750d MY_SNAPSHOT 00:26:36.428 08:23:09 -- target/nvmf_lvol.sh@47 -- # snapshot=b58e23a2-9310-4615-988a-744a3cf35bf2 00:26:36.428 08:23:09 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize e1ad929c-c706-4786-a407-9767f091750d 30 00:26:36.686 08:23:09 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b58e23a2-9310-4615-988a-744a3cf35bf2 MY_CLONE 00:26:36.943 08:23:10 -- target/nvmf_lvol.sh@49 -- # clone=7e1ac6e3-493a-4143-a5a6-8c0df96bd593 00:26:36.943 08:23:10 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 7e1ac6e3-493a-4143-a5a6-8c0df96bd593 00:26:37.201 08:23:10 -- target/nvmf_lvol.sh@53 -- # wait 60685 00:26:47.171 Initializing NVMe Controllers 00:26:47.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:47.171 Controller IO queue size 128, less than required. 00:26:47.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:26:47.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:26:47.172 Initialization complete. Launching workers. 
00:26:47.172 ======================================================== 00:26:47.172 Latency(us) 00:26:47.172 Device Information : IOPS MiB/s Average min max 00:26:47.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9869.14 38.55 12980.08 1739.89 67299.17 00:26:47.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9967.64 38.94 12845.21 174.39 74043.87 00:26:47.172 ======================================================== 00:26:47.172 Total : 19836.79 77.49 12912.31 174.39 74043.87 00:26:47.172 00:26:47.172 08:23:18 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:47.172 08:23:18 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e1ad929c-c706-4786-a407-9767f091750d 00:26:47.172 08:23:19 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ebeee7f5-ba27-4313-be23-c0beb0026211 00:26:47.172 08:23:19 -- target/nvmf_lvol.sh@60 -- # rm -f 00:26:47.172 08:23:19 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:26:47.172 08:23:19 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:26:47.172 08:23:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:47.172 08:23:19 -- nvmf/common.sh@116 -- # sync 00:26:47.172 08:23:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:47.172 08:23:19 -- nvmf/common.sh@119 -- # set +e 00:26:47.172 08:23:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:47.172 08:23:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:47.172 rmmod nvme_tcp 00:26:47.172 rmmod nvme_fabrics 00:26:47.172 rmmod nvme_keyring 00:26:47.172 08:23:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:47.172 08:23:19 -- nvmf/common.sh@123 -- # set -e 00:26:47.172 08:23:19 -- nvmf/common.sh@124 -- # return 0 00:26:47.172 08:23:19 -- nvmf/common.sh@477 -- # '[' -n 60601 ']' 00:26:47.172 08:23:19 -- nvmf/common.sh@478 -- # killprocess 60601 00:26:47.172 08:23:19 -- common/autotest_common.sh@926 -- # '[' -z 60601 ']' 00:26:47.172 08:23:19 -- common/autotest_common.sh@930 -- # kill -0 60601 00:26:47.172 08:23:19 -- common/autotest_common.sh@931 -- # uname 00:26:47.172 08:23:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:47.172 08:23:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60601 00:26:47.172 killing process with pid 60601 00:26:47.172 08:23:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:47.172 08:23:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:47.172 08:23:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60601' 00:26:47.172 08:23:19 -- common/autotest_common.sh@945 -- # kill 60601 00:26:47.172 08:23:19 -- common/autotest_common.sh@950 -- # wait 60601 00:26:47.172 08:23:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:47.172 08:23:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:47.172 08:23:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:47.172 08:23:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.172 08:23:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:47.172 08:23:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.172 08:23:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.172 08:23:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.172 08:23:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
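The data-management half of the suite and its teardown, condensed from the trace above, is sketched below (same hypothetical shorthand as in the earlier sketch: $rpc is the rpc.py path, and $lvol, $snapshot, $clone, $lvs stand for the UUIDs the trace prints, e.g. b58e23a2-... for MY_SNAPSHOT and 7e1ac6e3-... for MY_CLONE):

$rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT    # read-only snapshot of the active lvol
$rpc bdev_lvol_resize "$lvol" 30               # grow the lvol to LVOL_BDEV_FINAL_SIZE=30
$rpc bdev_lvol_clone "$snapshot" MY_CLONE      # thin clone backed by MY_SNAPSHOT
$rpc bdev_lvol_inflate "$clone"                # detach the clone from its snapshot
# spdk_nvme_perf (randwrite, qd 128, 10 s) runs against the exported namespace meanwhile; then:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"

Note the teardown order in the trace: the subsystem is deleted first, then the lvol bdev, then its lvstore.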
00:26:47.172 00:26:47.172 real 0m15.062s 00:26:47.172 user 1m2.682s 00:26:47.172 sys 0m4.052s 00:26:47.172 08:23:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.172 08:23:19 -- common/autotest_common.sh@10 -- # set +x 00:26:47.172 ************************************ 00:26:47.172 END TEST nvmf_lvol 00:26:47.172 ************************************ 00:26:47.172 08:23:19 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:26:47.172 08:23:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:47.172 08:23:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:47.172 08:23:19 -- common/autotest_common.sh@10 -- # set +x 00:26:47.172 ************************************ 00:26:47.172 START TEST nvmf_lvs_grow 00:26:47.172 ************************************ 00:26:47.172 08:23:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:26:47.172 * Looking for test storage... 00:26:47.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:47.172 08:23:19 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:47.172 08:23:19 -- nvmf/common.sh@7 -- # uname -s 00:26:47.172 08:23:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.172 08:23:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.172 08:23:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.172 08:23:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.172 08:23:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.172 08:23:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.172 08:23:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.172 08:23:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.172 08:23:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.172 08:23:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.172 08:23:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:26:47.172 08:23:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:26:47.172 08:23:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.172 08:23:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.172 08:23:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:47.172 08:23:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:47.172 08:23:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.172 08:23:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.172 08:23:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.172 08:23:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.172 08:23:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.172 08:23:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.172 08:23:19 -- paths/export.sh@5 -- # export PATH 00:26:47.172 08:23:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.172 08:23:19 -- nvmf/common.sh@46 -- # : 0 00:26:47.172 08:23:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:47.172 08:23:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:47.172 08:23:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:47.172 08:23:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.172 08:23:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.172 08:23:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:47.172 08:23:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:47.172 08:23:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:47.172 08:23:19 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:47.172 08:23:19 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:47.172 08:23:19 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:26:47.172 08:23:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:47.172 08:23:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.172 08:23:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:47.172 08:23:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:47.172 08:23:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:47.172 08:23:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.172 08:23:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.172 08:23:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.172 08:23:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:47.172 08:23:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:47.172 08:23:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:47.172 08:23:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:47.172 08:23:19 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:47.172 08:23:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:47.172 08:23:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.172 08:23:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.172 08:23:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:47.172 08:23:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:47.172 08:23:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:47.172 08:23:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:47.172 08:23:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:47.172 08:23:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.172 08:23:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:47.172 08:23:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:47.173 08:23:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:47.173 08:23:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:47.173 08:23:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:47.173 08:23:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:47.173 Cannot find device "nvmf_tgt_br" 00:26:47.173 08:23:19 -- nvmf/common.sh@154 -- # true 00:26:47.173 08:23:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:47.173 Cannot find device "nvmf_tgt_br2" 00:26:47.173 08:23:20 -- nvmf/common.sh@155 -- # true 00:26:47.173 08:23:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:47.173 08:23:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:47.173 Cannot find device "nvmf_tgt_br" 00:26:47.173 08:23:20 -- nvmf/common.sh@157 -- # true 00:26:47.173 08:23:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:47.173 Cannot find device "nvmf_tgt_br2" 00:26:47.173 08:23:20 -- nvmf/common.sh@158 -- # true 00:26:47.173 08:23:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:47.173 08:23:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:47.173 08:23:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:47.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:47.173 08:23:20 -- nvmf/common.sh@161 -- # true 00:26:47.173 08:23:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:47.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:47.173 08:23:20 -- nvmf/common.sh@162 -- # true 00:26:47.173 08:23:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:47.173 08:23:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:47.173 08:23:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:47.173 08:23:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:47.173 08:23:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:47.173 08:23:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:47.173 08:23:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:47.173 08:23:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:47.173 08:23:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:26:47.173 08:23:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:47.173 08:23:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:47.173 08:23:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:47.173 08:23:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:47.173 08:23:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:47.173 08:23:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:47.173 08:23:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:47.173 08:23:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:47.173 08:23:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:47.173 08:23:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:47.173 08:23:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:47.173 08:23:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:47.173 08:23:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:47.173 08:23:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:47.173 08:23:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:47.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:26:47.173 00:26:47.173 --- 10.0.0.2 ping statistics --- 00:26:47.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.173 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:47.173 08:23:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:47.173 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:47.173 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:26:47.173 00:26:47.173 --- 10.0.0.3 ping statistics --- 00:26:47.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.173 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:47.173 08:23:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:47.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:47.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:26:47.173 00:26:47.173 --- 10.0.0.1 ping statistics --- 00:26:47.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.173 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:26:47.173 08:23:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.173 08:23:20 -- nvmf/common.sh@421 -- # return 0 00:26:47.173 08:23:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:47.173 08:23:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.173 08:23:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:47.173 08:23:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:47.173 08:23:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.173 08:23:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:47.173 08:23:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:47.173 08:23:20 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:26:47.173 08:23:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:47.173 08:23:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:47.173 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:47.173 08:23:20 -- nvmf/common.sh@469 -- # nvmfpid=60996 00:26:47.173 08:23:20 -- nvmf/common.sh@470 -- # waitforlisten 60996 00:26:47.173 08:23:20 -- common/autotest_common.sh@819 -- # '[' -z 60996 ']' 00:26:47.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.173 08:23:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.173 08:23:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:47.173 08:23:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.173 08:23:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:47.173 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:47.173 08:23:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:47.173 [2024-04-17 08:23:20.342957] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:47.173 [2024-04-17 08:23:20.343015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.173 [2024-04-17 08:23:20.479927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.432 [2024-04-17 08:23:20.583534] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:47.432 [2024-04-17 08:23:20.583670] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.432 [2024-04-17 08:23:20.583679] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.432 [2024-04-17 08:23:20.583685] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:47.432 [2024-04-17 08:23:20.583710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.999 08:23:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:47.999 08:23:21 -- common/autotest_common.sh@852 -- # return 0 00:26:47.999 08:23:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:47.999 08:23:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:47.999 08:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:47.999 08:23:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.999 08:23:21 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:48.258 [2024-04-17 08:23:21.433034] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:26:48.258 08:23:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:48.258 08:23:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:48.258 08:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:48.258 ************************************ 00:26:48.258 START TEST lvs_grow_clean 00:26:48.258 ************************************ 00:26:48.258 08:23:21 -- common/autotest_common.sh@1104 -- # lvs_grow 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:48.258 08:23:21 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:48.516 08:23:21 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:26:48.516 08:23:21 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:26:48.774 08:23:21 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f7956862-8757-4603-a458-48a624787c91 00:26:48.774 08:23:21 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:26:48.774 08:23:21 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7956862-8757-4603-a458-48a624787c91 00:26:49.033 08:23:22 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:26:49.033 08:23:22 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:26:49.033 08:23:22 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f7956862-8757-4603-a458-48a624787c91 lvol 150 00:26:49.033 08:23:22 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6bcbdeb9-c769-4b99-ad3b-a340af05f58c 00:26:49.033 08:23:22 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:49.033 08:23:22 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:26:49.293 [2024-04-17 08:23:22.557588] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:26:49.293 [2024-04-17 08:23:22.557667] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:26:49.293 true 00:26:49.293 08:23:22 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7956862-8757-4603-a458-48a624787c91 00:26:49.293 08:23:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:26:49.551 08:23:22 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:26:49.551 08:23:22 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:49.810 08:23:22 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6bcbdeb9-c769-4b99-ad3b-a340af05f58c 00:26:50.068 08:23:23 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:50.068 [2024-04-17 08:23:23.364523] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.068 08:23:23 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:50.327 08:23:23 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:26:50.327 08:23:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=61079 00:26:50.327 08:23:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:50.327 08:23:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 61079 /var/tmp/bdevperf.sock 00:26:50.327 08:23:23 -- common/autotest_common.sh@819 -- # '[' -z 61079 ']' 00:26:50.327 08:23:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:50.327 08:23:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:50.327 08:23:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:50.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:50.327 08:23:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:50.327 08:23:23 -- common/autotest_common.sh@10 -- # set +x 00:26:50.327 [2024-04-17 08:23:23.628446] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:50.327 [2024-04-17 08:23:23.628607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61079 ] 00:26:50.585 [2024-04-17 08:23:23.767149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.585 [2024-04-17 08:23:23.874208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.519 08:23:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:51.519 08:23:24 -- common/autotest_common.sh@852 -- # return 0 00:26:51.519 08:23:24 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:26:51.519 Nvme0n1 00:26:51.519 08:23:24 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:26:51.779 [ 00:26:51.779 { 00:26:51.779 "name": "Nvme0n1", 00:26:51.779 "aliases": [ 00:26:51.779 "6bcbdeb9-c769-4b99-ad3b-a340af05f58c" 00:26:51.779 ], 00:26:51.779 "product_name": "NVMe disk", 00:26:51.779 "block_size": 4096, 00:26:51.779 "num_blocks": 38912, 00:26:51.779 "uuid": "6bcbdeb9-c769-4b99-ad3b-a340af05f58c", 00:26:51.779 "assigned_rate_limits": { 00:26:51.779 "rw_ios_per_sec": 0, 00:26:51.779 "rw_mbytes_per_sec": 0, 00:26:51.779 "r_mbytes_per_sec": 0, 00:26:51.779 "w_mbytes_per_sec": 0 00:26:51.779 }, 00:26:51.779 "claimed": false, 00:26:51.779 "zoned": false, 00:26:51.779 "supported_io_types": { 00:26:51.779 "read": true, 00:26:51.779 "write": true, 00:26:51.779 "unmap": true, 00:26:51.779 "write_zeroes": true, 00:26:51.779 "flush": true, 00:26:51.779 "reset": true, 00:26:51.779 "compare": true, 00:26:51.779 "compare_and_write": true, 00:26:51.779 "abort": true, 00:26:51.779 "nvme_admin": true, 00:26:51.779 "nvme_io": true 00:26:51.779 }, 00:26:51.779 "driver_specific": { 00:26:51.779 "nvme": [ 00:26:51.779 { 00:26:51.779 "trid": { 00:26:51.779 "trtype": "TCP", 00:26:51.779 "adrfam": "IPv4", 00:26:51.779 "traddr": "10.0.0.2", 00:26:51.779 "trsvcid": "4420", 00:26:51.779 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:51.779 }, 00:26:51.779 "ctrlr_data": { 00:26:51.779 "cntlid": 1, 00:26:51.779 "vendor_id": "0x8086", 00:26:51.779 "model_number": "SPDK bdev Controller", 00:26:51.779 "serial_number": "SPDK0", 00:26:51.779 "firmware_revision": "24.01.1", 00:26:51.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:51.779 "oacs": { 00:26:51.779 "security": 0, 00:26:51.779 "format": 0, 00:26:51.779 "firmware": 0, 00:26:51.779 "ns_manage": 0 00:26:51.779 }, 00:26:51.779 "multi_ctrlr": true, 00:26:51.779 "ana_reporting": false 00:26:51.779 }, 00:26:51.779 "vs": { 00:26:51.779 "nvme_version": "1.3" 00:26:51.779 }, 00:26:51.779 "ns_data": { 00:26:51.779 "id": 1, 00:26:51.779 "can_share": true 00:26:51.779 } 00:26:51.779 } 00:26:51.779 ], 00:26:51.779 "mp_policy": "active_passive" 00:26:51.779 } 00:26:51.779 } 00:26:51.779 ] 00:26:51.779 08:23:25 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=61103 00:26:51.779 08:23:25 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:26:51.779 08:23:25 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:52.038 Running I/O for 10 seconds... 
00:26:52.975 Latency(us) 00:26:52.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:52.975 Nvme0n1 : 1.00 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:26:52.975 =================================================================================================================== 00:26:52.975 Total : 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:26:52.975 00:26:53.911 08:23:27 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f7956862-8757-4603-a458-48a624787c91 00:26:53.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:53.911 Nvme0n1 : 2.00 9333.00 36.46 0.00 0.00 0.00 0.00 0.00 00:26:53.911 =================================================================================================================== 00:26:53.911 Total : 9333.00 36.46 0.00 0.00 0.00 0.00 0.00 00:26:53.911 00:26:54.170 true 00:26:54.170 08:23:27 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7956862-8757-4603-a458-48a624787c91 00:26:54.170 08:23:27 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:26:54.428 08:23:27 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:26:54.428 08:23:27 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:26:54.428 08:23:27 -- target/nvmf_lvs_grow.sh@65 -- # wait 61103 00:26:54.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:54.996 Nvme0n1 : 3.00 9397.00 36.71 0.00 0.00 0.00 0.00 0.00 00:26:54.996 =================================================================================================================== 00:26:54.996 Total : 9397.00 36.71 0.00 0.00 0.00 0.00 0.00 00:26:54.996 00:26:55.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:55.947 Nvme0n1 : 4.00 9238.50 36.09 0.00 0.00 0.00 0.00 0.00 00:26:55.947 =================================================================================================================== 00:26:55.947 Total : 9238.50 36.09 0.00 0.00 0.00 0.00 0.00 00:26:55.947 00:26:56.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:56.903 Nvme0n1 : 5.00 9270.40 36.21 0.00 0.00 0.00 0.00 0.00 00:26:56.903 =================================================================================================================== 00:26:56.903 Total : 9270.40 36.21 0.00 0.00 0.00 0.00 0.00 00:26:56.903 00:26:57.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:57.841 Nvme0n1 : 6.00 9270.50 36.21 0.00 0.00 0.00 0.00 0.00 00:26:57.841 =================================================================================================================== 00:26:57.841 Total : 9270.50 36.21 0.00 0.00 0.00 0.00 0.00 00:26:57.841 00:26:59.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:59.219 Nvme0n1 : 7.00 9252.43 36.14 0.00 0.00 0.00 0.00 0.00 00:26:59.219 =================================================================================================================== 00:26:59.219 Total : 9252.43 36.14 0.00 0.00 0.00 0.00 0.00 00:26:59.219 00:27:00.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:00.156 Nvme0n1 : 8.00 9270.62 36.21 0.00 0.00 0.00 0.00 0.00 00:27:00.156 
=================================================================================================================== 00:27:00.156 Total : 9270.62 36.21 0.00 0.00 0.00 0.00 0.00 00:27:00.156 00:27:01.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:01.113 Nvme0n1 : 9.00 9298.11 36.32 0.00 0.00 0.00 0.00 0.00 00:27:01.113 =================================================================================================================== 00:27:01.113 Total : 9298.11 36.32 0.00 0.00 0.00 0.00 0.00 00:27:01.113 00:27:02.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:02.065 Nvme0n1 : 10.00 9295.40 36.31 0.00 0.00 0.00 0.00 0.00 00:27:02.065 =================================================================================================================== 00:27:02.065 Total : 9295.40 36.31 0.00 0.00 0.00 0.00 0.00 00:27:02.065 00:27:02.065 00:27:02.065 Latency(us) 00:27:02.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:02.065 Nvme0n1 : 10.01 9296.49 36.31 0.00 0.00 13765.12 10817.73 34799.90 00:27:02.065 =================================================================================================================== 00:27:02.065 Total : 9296.49 36.31 0.00 0.00 13765.12 10817.73 34799.90 00:27:02.065 0 00:27:02.065 08:23:35 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 61079 00:27:02.065 08:23:35 -- common/autotest_common.sh@926 -- # '[' -z 61079 ']' 00:27:02.065 08:23:35 -- common/autotest_common.sh@930 -- # kill -0 61079 00:27:02.065 08:23:35 -- common/autotest_common.sh@931 -- # uname 00:27:02.065 08:23:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:02.065 08:23:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61079 00:27:02.065 killing process with pid 61079 00:27:02.065 Received shutdown signal, test time was about 10.000000 seconds 00:27:02.065 00:27:02.065 Latency(us) 00:27:02.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.065 =================================================================================================================== 00:27:02.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:02.065 08:23:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:02.065 08:23:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:02.065 08:23:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61079' 00:27:02.065 08:23:35 -- common/autotest_common.sh@945 -- # kill 61079 00:27:02.065 08:23:35 -- common/autotest_common.sh@950 -- # wait 61079 00:27:02.324 08:23:35 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:02.325 08:23:35 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7956862-8757-4603-a458-48a624787c91 00:27:02.325 08:23:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:27:02.583 08:23:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:27:02.583 08:23:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:27:02.583 08:23:35 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:02.843 [2024-04-17 08:23:35.983041] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:02.843 
08:23:36 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7956862-8757-4603-a458-48a624787c91 00:27:02.843 08:23:36 -- common/autotest_common.sh@640 -- # local es=0 00:27:02.843 08:23:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7956862-8757-4603-a458-48a624787c91 00:27:02.843 08:23:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.843 08:23:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.843 08:23:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.843 08:23:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.843 08:23:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.843 08:23:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.843 08:23:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.843 08:23:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:02.843 08:23:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7956862-8757-4603-a458-48a624787c91 00:27:03.102 request: 00:27:03.102 { 00:27:03.102 "uuid": "f7956862-8757-4603-a458-48a624787c91", 00:27:03.102 "method": "bdev_lvol_get_lvstores", 00:27:03.102 "req_id": 1 00:27:03.102 } 00:27:03.102 Got JSON-RPC error response 00:27:03.102 response: 00:27:03.102 { 00:27:03.102 "code": -19, 00:27:03.102 "message": "No such device" 00:27:03.102 } 00:27:03.102 08:23:36 -- common/autotest_common.sh@643 -- # es=1 00:27:03.102 08:23:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:03.102 08:23:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:03.102 08:23:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:03.102 08:23:36 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:03.102 aio_bdev 00:27:03.102 08:23:36 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6bcbdeb9-c769-4b99-ad3b-a340af05f58c 00:27:03.102 08:23:36 -- common/autotest_common.sh@887 -- # local bdev_name=6bcbdeb9-c769-4b99-ad3b-a340af05f58c 00:27:03.102 08:23:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:27:03.102 08:23:36 -- common/autotest_common.sh@889 -- # local i 00:27:03.102 08:23:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:27:03.102 08:23:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:27:03.102 08:23:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:03.361 08:23:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6bcbdeb9-c769-4b99-ad3b-a340af05f58c -t 2000 00:27:03.620 [ 00:27:03.620 { 00:27:03.620 "name": "6bcbdeb9-c769-4b99-ad3b-a340af05f58c", 00:27:03.620 "aliases": [ 00:27:03.620 "lvs/lvol" 00:27:03.620 ], 00:27:03.620 "product_name": "Logical Volume", 00:27:03.620 "block_size": 4096, 00:27:03.620 "num_blocks": 38912, 00:27:03.620 "uuid": "6bcbdeb9-c769-4b99-ad3b-a340af05f58c", 00:27:03.620 "assigned_rate_limits": { 00:27:03.620 "rw_ios_per_sec": 0, 00:27:03.620 "rw_mbytes_per_sec": 0, 00:27:03.620 "r_mbytes_per_sec": 0, 00:27:03.620 
"w_mbytes_per_sec": 0 00:27:03.620 }, 00:27:03.620 "claimed": false, 00:27:03.620 "zoned": false, 00:27:03.620 "supported_io_types": { 00:27:03.620 "read": true, 00:27:03.620 "write": true, 00:27:03.620 "unmap": true, 00:27:03.620 "write_zeroes": true, 00:27:03.620 "flush": false, 00:27:03.620 "reset": true, 00:27:03.620 "compare": false, 00:27:03.620 "compare_and_write": false, 00:27:03.620 "abort": false, 00:27:03.620 "nvme_admin": false, 00:27:03.620 "nvme_io": false 00:27:03.620 }, 00:27:03.620 "driver_specific": { 00:27:03.620 "lvol": { 00:27:03.620 "lvol_store_uuid": "f7956862-8757-4603-a458-48a624787c91", 00:27:03.620 "base_bdev": "aio_bdev", 00:27:03.620 "thin_provision": false, 00:27:03.620 "snapshot": false, 00:27:03.620 "clone": false, 00:27:03.620 "esnap_clone": false 00:27:03.620 } 00:27:03.620 } 00:27:03.620 } 00:27:03.620 ] 00:27:03.620 08:23:36 -- common/autotest_common.sh@895 -- # return 0 00:27:03.620 08:23:36 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7956862-8757-4603-a458-48a624787c91 00:27:03.620 08:23:36 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:27:03.878 08:23:36 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:27:03.879 08:23:36 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7956862-8757-4603-a458-48a624787c91 00:27:03.879 08:23:36 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:27:03.879 08:23:37 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:27:03.879 08:23:37 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6bcbdeb9-c769-4b99-ad3b-a340af05f58c 00:27:04.161 08:23:37 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f7956862-8757-4603-a458-48a624787c91 00:27:04.426 08:23:37 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:04.426 08:23:37 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:04.684 ************************************ 00:27:04.684 END TEST lvs_grow_clean 00:27:04.684 ************************************ 00:27:04.684 00:27:04.684 real 0m16.517s 00:27:04.684 user 0m15.579s 00:27:04.684 sys 0m2.157s 00:27:04.684 08:23:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.684 08:23:37 -- common/autotest_common.sh@10 -- # set +x 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:27:04.943 08:23:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:04.943 08:23:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:04.943 08:23:38 -- common/autotest_common.sh@10 -- # set +x 00:27:04.943 ************************************ 00:27:04.943 START TEST lvs_grow_dirty 00:27:04.943 ************************************ 00:27:04.943 08:23:38 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@20 -- # local 
lvol_bdev_size_mb=150 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:04.943 08:23:38 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:05.202 08:23:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:05.202 08:23:38 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:05.202 08:23:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:05.461 08:23:38 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:05.461 08:23:38 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:05.461 08:23:38 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 lvol 150 00:27:05.720 08:23:38 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c16b29cc-081e-4000-9864-20cc512a5361 00:27:05.720 08:23:38 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:05.720 08:23:38 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:05.720 [2024-04-17 08:23:39.039362] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:05.720 [2024-04-17 08:23:39.039436] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:05.720 true 00:27:05.978 08:23:39 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:05.978 08:23:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:05.978 08:23:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:05.978 08:23:39 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:06.238 08:23:39 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c16b29cc-081e-4000-9864-20cc512a5361 00:27:06.498 08:23:39 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:06.757 08:23:39 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:06.757 08:23:40 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:06.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:06.757 08:23:40 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=61325 00:27:06.757 08:23:40 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:06.757 08:23:40 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 61325 /var/tmp/bdevperf.sock 00:27:06.757 08:23:40 -- common/autotest_common.sh@819 -- # '[' -z 61325 ']' 00:27:06.757 08:23:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:06.757 08:23:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:06.757 08:23:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:06.757 08:23:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:06.757 08:23:40 -- common/autotest_common.sh@10 -- # set +x 00:27:06.757 [2024-04-17 08:23:40.067989] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:06.757 [2024-04-17 08:23:40.068154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61325 ] 00:27:07.016 [2024-04-17 08:23:40.206665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.016 [2024-04-17 08:23:40.306967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.586 08:23:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:07.586 08:23:40 -- common/autotest_common.sh@852 -- # return 0 00:27:07.586 08:23:40 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:08.153 Nvme0n1 00:27:08.153 08:23:41 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:08.153 [ 00:27:08.153 { 00:27:08.153 "name": "Nvme0n1", 00:27:08.153 "aliases": [ 00:27:08.153 "c16b29cc-081e-4000-9864-20cc512a5361" 00:27:08.153 ], 00:27:08.153 "product_name": "NVMe disk", 00:27:08.153 "block_size": 4096, 00:27:08.153 "num_blocks": 38912, 00:27:08.153 "uuid": "c16b29cc-081e-4000-9864-20cc512a5361", 00:27:08.153 "assigned_rate_limits": { 00:27:08.153 "rw_ios_per_sec": 0, 00:27:08.153 "rw_mbytes_per_sec": 0, 00:27:08.153 "r_mbytes_per_sec": 0, 00:27:08.153 "w_mbytes_per_sec": 0 00:27:08.153 }, 00:27:08.153 "claimed": false, 00:27:08.153 "zoned": false, 00:27:08.153 "supported_io_types": { 00:27:08.153 "read": true, 00:27:08.153 "write": true, 00:27:08.153 "unmap": true, 00:27:08.153 "write_zeroes": true, 00:27:08.153 "flush": true, 00:27:08.153 "reset": true, 00:27:08.153 "compare": true, 00:27:08.153 "compare_and_write": true, 00:27:08.153 "abort": true, 00:27:08.153 "nvme_admin": true, 00:27:08.153 "nvme_io": true 00:27:08.153 }, 00:27:08.153 "driver_specific": { 00:27:08.153 "nvme": [ 00:27:08.153 { 00:27:08.153 "trid": { 00:27:08.153 "trtype": "TCP", 00:27:08.153 "adrfam": "IPv4", 00:27:08.153 "traddr": "10.0.0.2", 00:27:08.153 "trsvcid": "4420", 00:27:08.153 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:08.153 }, 00:27:08.153 "ctrlr_data": { 00:27:08.153 "cntlid": 1, 00:27:08.153 "vendor_id": "0x8086", 00:27:08.153 "model_number": "SPDK bdev Controller", 00:27:08.153 "serial_number": "SPDK0", 00:27:08.153 "firmware_revision": "24.01.1", 00:27:08.153 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:27:08.153 "oacs": { 00:27:08.153 "security": 0, 00:27:08.153 "format": 0, 00:27:08.153 "firmware": 0, 00:27:08.153 "ns_manage": 0 00:27:08.153 }, 00:27:08.153 "multi_ctrlr": true, 00:27:08.153 "ana_reporting": false 00:27:08.153 }, 00:27:08.153 "vs": { 00:27:08.153 "nvme_version": "1.3" 00:27:08.153 }, 00:27:08.153 "ns_data": { 00:27:08.153 "id": 1, 00:27:08.153 "can_share": true 00:27:08.153 } 00:27:08.153 } 00:27:08.153 ], 00:27:08.153 "mp_policy": "active_passive" 00:27:08.153 } 00:27:08.153 } 00:27:08.153 ] 00:27:08.153 08:23:41 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=61354 00:27:08.153 08:23:41 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:08.153 08:23:41 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:08.412 Running I/O for 10 seconds... 00:27:09.351 Latency(us) 00:27:09.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:09.351 Nvme0n1 : 1.00 9779.00 38.20 0.00 0.00 0.00 0.00 0.00 00:27:09.351 =================================================================================================================== 00:27:09.351 Total : 9779.00 38.20 0.00 0.00 0.00 0.00 0.00 00:27:09.351 00:27:10.288 08:23:43 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:10.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:10.288 Nvme0n1 : 2.00 9715.50 37.95 0.00 0.00 0.00 0.00 0.00 00:27:10.288 =================================================================================================================== 00:27:10.288 Total : 9715.50 37.95 0.00 0.00 0.00 0.00 0.00 00:27:10.288 00:27:10.546 true 00:27:10.546 08:23:43 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:10.546 08:23:43 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:10.804 08:23:43 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:10.804 08:23:43 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:10.804 08:23:43 -- target/nvmf_lvs_grow.sh@65 -- # wait 61354 00:27:11.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:11.389 Nvme0n1 : 3.00 9652.00 37.70 0.00 0.00 0.00 0.00 0.00 00:27:11.389 =================================================================================================================== 00:27:11.389 Total : 9652.00 37.70 0.00 0.00 0.00 0.00 0.00 00:27:11.389 00:27:12.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:12.351 Nvme0n1 : 4.00 9588.50 37.46 0.00 0.00 0.00 0.00 0.00 00:27:12.351 =================================================================================================================== 00:27:12.351 Total : 9588.50 37.46 0.00 0.00 0.00 0.00 0.00 00:27:12.351 00:27:13.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:13.288 Nvme0n1 : 5.00 9474.20 37.01 0.00 0.00 0.00 0.00 0.00 00:27:13.288 =================================================================================================================== 00:27:13.288 Total : 9474.20 37.01 0.00 0.00 0.00 0.00 0.00 00:27:13.288 00:27:14.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:27:14.222 Nvme0n1 : 6.00 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:27:14.222 =================================================================================================================== 00:27:14.222 Total : 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:27:14.222 00:27:15.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:15.621 Nvme0n1 : 7.00 9307.29 36.36 0.00 0.00 0.00 0.00 0.00 00:27:15.621 =================================================================================================================== 00:27:15.621 Total : 9307.29 36.36 0.00 0.00 0.00 0.00 0.00 00:27:15.621 00:27:16.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:16.556 Nvme0n1 : 8.00 9239.25 36.09 0.00 0.00 0.00 0.00 0.00 00:27:16.556 =================================================================================================================== 00:27:16.556 Total : 9239.25 36.09 0.00 0.00 0.00 0.00 0.00 00:27:16.556 00:27:17.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:17.493 Nvme0n1 : 9.00 8986.56 35.10 0.00 0.00 0.00 0.00 0.00 00:27:17.493 =================================================================================================================== 00:27:17.493 Total : 8986.56 35.10 0.00 0.00 0.00 0.00 0.00 00:27:17.493 00:27:18.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.429 Nvme0n1 : 10.00 8087.90 31.59 0.00 0.00 0.00 0.00 0.00 00:27:18.429 =================================================================================================================== 00:27:18.429 Total : 8087.90 31.59 0.00 0.00 0.00 0.00 0.00 00:27:18.429 00:27:18.429 00:27:18.429 Latency(us) 00:27:18.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.429 Nvme0n1 : 10.07 8045.56 31.43 0.00 0.00 15901.85 10874.97 1311406.84 00:27:18.429 =================================================================================================================== 00:27:18.429 Total : 8045.56 31.43 0.00 0.00 15901.85 10874.97 1311406.84 00:27:18.429 0 00:27:18.429 08:23:51 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 61325 00:27:18.429 08:23:51 -- common/autotest_common.sh@926 -- # '[' -z 61325 ']' 00:27:18.429 08:23:51 -- common/autotest_common.sh@930 -- # kill -0 61325 00:27:18.429 08:23:51 -- common/autotest_common.sh@931 -- # uname 00:27:18.429 08:23:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:18.430 08:23:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61325 00:27:18.430 08:23:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:18.430 08:23:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:18.430 08:23:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61325' 00:27:18.430 killing process with pid 61325 00:27:18.430 08:23:51 -- common/autotest_common.sh@945 -- # kill 61325 00:27:18.430 Received shutdown signal, test time was about 10.000000 seconds 00:27:18.430 00:27:18.430 Latency(us) 00:27:18.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.430 =================================================================================================================== 00:27:18.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.430 08:23:51 -- common/autotest_common.sh@950 -- # wait 61325 00:27:18.688 08:23:51 -- 
target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:18.944 08:23:52 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:18.944 08:23:52 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:27:18.944 08:23:52 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:27:18.944 08:23:52 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:27:18.944 08:23:52 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60996 00:27:18.944 08:23:52 -- target/nvmf_lvs_grow.sh@74 -- # wait 60996 00:27:19.201 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60996 Killed "${NVMF_APP[@]}" "$@" 00:27:19.201 08:23:52 -- target/nvmf_lvs_grow.sh@74 -- # true 00:27:19.201 08:23:52 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:27:19.201 08:23:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:19.201 08:23:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:19.201 08:23:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.201 08:23:52 -- nvmf/common.sh@469 -- # nvmfpid=61480 00:27:19.201 08:23:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:19.201 08:23:52 -- nvmf/common.sh@470 -- # waitforlisten 61480 00:27:19.201 08:23:52 -- common/autotest_common.sh@819 -- # '[' -z 61480 ']' 00:27:19.201 08:23:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.201 08:23:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:19.201 08:23:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.201 08:23:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:19.201 08:23:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.201 [2024-04-17 08:23:52.353964] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:19.201 [2024-04-17 08:23:52.354119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.201 [2024-04-17 08:23:52.493590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.459 [2024-04-17 08:23:52.573048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:19.459 [2024-04-17 08:23:52.573294] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.459 [2024-04-17 08:23:52.573305] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.459 [2024-04-17 08:23:52.573311] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:19.459 [2024-04-17 08:23:52.573361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.031 08:23:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:20.031 08:23:53 -- common/autotest_common.sh@852 -- # return 0 00:27:20.031 08:23:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:20.031 08:23:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:20.031 08:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:20.031 08:23:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.031 08:23:53 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:20.323 [2024-04-17 08:23:53.449039] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:20.323 [2024-04-17 08:23:53.449494] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:20.323 [2024-04-17 08:23:53.449759] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:20.323 08:23:53 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:27:20.323 08:23:53 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c16b29cc-081e-4000-9864-20cc512a5361 00:27:20.323 08:23:53 -- common/autotest_common.sh@887 -- # local bdev_name=c16b29cc-081e-4000-9864-20cc512a5361 00:27:20.323 08:23:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:27:20.323 08:23:53 -- common/autotest_common.sh@889 -- # local i 00:27:20.323 08:23:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:27:20.323 08:23:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:27:20.323 08:23:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:20.583 08:23:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c16b29cc-081e-4000-9864-20cc512a5361 -t 2000 00:27:20.583 [ 00:27:20.583 { 00:27:20.583 "name": "c16b29cc-081e-4000-9864-20cc512a5361", 00:27:20.583 "aliases": [ 00:27:20.583 "lvs/lvol" 00:27:20.583 ], 00:27:20.583 "product_name": "Logical Volume", 00:27:20.583 "block_size": 4096, 00:27:20.583 "num_blocks": 38912, 00:27:20.583 "uuid": "c16b29cc-081e-4000-9864-20cc512a5361", 00:27:20.583 "assigned_rate_limits": { 00:27:20.583 "rw_ios_per_sec": 0, 00:27:20.583 "rw_mbytes_per_sec": 0, 00:27:20.583 "r_mbytes_per_sec": 0, 00:27:20.583 "w_mbytes_per_sec": 0 00:27:20.583 }, 00:27:20.583 "claimed": false, 00:27:20.583 "zoned": false, 00:27:20.583 "supported_io_types": { 00:27:20.583 "read": true, 00:27:20.583 "write": true, 00:27:20.583 "unmap": true, 00:27:20.583 "write_zeroes": true, 00:27:20.583 "flush": false, 00:27:20.583 "reset": true, 00:27:20.583 "compare": false, 00:27:20.583 "compare_and_write": false, 00:27:20.583 "abort": false, 00:27:20.583 "nvme_admin": false, 00:27:20.583 "nvme_io": false 00:27:20.583 }, 00:27:20.583 "driver_specific": { 00:27:20.583 "lvol": { 00:27:20.583 "lvol_store_uuid": "684fd628-2c2a-4e9f-92c0-4948881ec2f0", 00:27:20.583 "base_bdev": "aio_bdev", 00:27:20.583 "thin_provision": false, 00:27:20.583 "snapshot": false, 00:27:20.583 "clone": false, 00:27:20.583 "esnap_clone": false 00:27:20.583 } 00:27:20.583 } 00:27:20.583 } 00:27:20.583 ] 00:27:20.583 08:23:53 -- common/autotest_common.sh@895 -- # return 0 00:27:20.583 08:23:53 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:20.583 08:23:53 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:27:20.841 08:23:54 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:27:20.841 08:23:54 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:27:20.841 08:23:54 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:21.099 08:23:54 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:27:21.099 08:23:54 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:21.358 [2024-04-17 08:23:54.476839] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:21.358 08:23:54 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:21.358 08:23:54 -- common/autotest_common.sh@640 -- # local es=0 00:27:21.358 08:23:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:21.358 08:23:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.358 08:23:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:21.358 08:23:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.358 08:23:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:21.358 08:23:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.358 08:23:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:21.358 08:23:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:21.358 08:23:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:21.358 08:23:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:21.617 request: 00:27:21.617 { 00:27:21.617 "uuid": "684fd628-2c2a-4e9f-92c0-4948881ec2f0", 00:27:21.617 "method": "bdev_lvol_get_lvstores", 00:27:21.617 "req_id": 1 00:27:21.617 } 00:27:21.617 Got JSON-RPC error response 00:27:21.617 response: 00:27:21.617 { 00:27:21.617 "code": -19, 00:27:21.617 "message": "No such device" 00:27:21.617 } 00:27:21.617 08:23:54 -- common/autotest_common.sh@643 -- # es=1 00:27:21.617 08:23:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:21.617 08:23:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:21.617 08:23:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:21.617 08:23:54 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:21.617 aio_bdev 00:27:21.617 08:23:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c16b29cc-081e-4000-9864-20cc512a5361 00:27:21.617 08:23:54 -- common/autotest_common.sh@887 -- # local bdev_name=c16b29cc-081e-4000-9864-20cc512a5361 00:27:21.617 08:23:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:27:21.617 08:23:54 -- common/autotest_common.sh@889 -- # local i 00:27:21.617 08:23:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:27:21.617 08:23:54 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:27:21.617 08:23:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:21.876 08:23:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c16b29cc-081e-4000-9864-20cc512a5361 -t 2000 00:27:22.134 [ 00:27:22.134 { 00:27:22.134 "name": "c16b29cc-081e-4000-9864-20cc512a5361", 00:27:22.134 "aliases": [ 00:27:22.134 "lvs/lvol" 00:27:22.134 ], 00:27:22.134 "product_name": "Logical Volume", 00:27:22.134 "block_size": 4096, 00:27:22.134 "num_blocks": 38912, 00:27:22.134 "uuid": "c16b29cc-081e-4000-9864-20cc512a5361", 00:27:22.134 "assigned_rate_limits": { 00:27:22.134 "rw_ios_per_sec": 0, 00:27:22.134 "rw_mbytes_per_sec": 0, 00:27:22.134 "r_mbytes_per_sec": 0, 00:27:22.134 "w_mbytes_per_sec": 0 00:27:22.134 }, 00:27:22.134 "claimed": false, 00:27:22.134 "zoned": false, 00:27:22.134 "supported_io_types": { 00:27:22.134 "read": true, 00:27:22.134 "write": true, 00:27:22.134 "unmap": true, 00:27:22.134 "write_zeroes": true, 00:27:22.134 "flush": false, 00:27:22.134 "reset": true, 00:27:22.134 "compare": false, 00:27:22.134 "compare_and_write": false, 00:27:22.134 "abort": false, 00:27:22.134 "nvme_admin": false, 00:27:22.134 "nvme_io": false 00:27:22.134 }, 00:27:22.134 "driver_specific": { 00:27:22.134 "lvol": { 00:27:22.134 "lvol_store_uuid": "684fd628-2c2a-4e9f-92c0-4948881ec2f0", 00:27:22.134 "base_bdev": "aio_bdev", 00:27:22.134 "thin_provision": false, 00:27:22.134 "snapshot": false, 00:27:22.134 "clone": false, 00:27:22.134 "esnap_clone": false 00:27:22.134 } 00:27:22.134 } 00:27:22.134 } 00:27:22.134 ] 00:27:22.134 08:23:55 -- common/autotest_common.sh@895 -- # return 0 00:27:22.134 08:23:55 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:22.134 08:23:55 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:27:22.392 08:23:55 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:27:22.392 08:23:55 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:22.392 08:23:55 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:27:22.651 08:23:55 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:27:22.651 08:23:55 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c16b29cc-081e-4000-9864-20cc512a5361 00:27:22.651 08:23:55 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 684fd628-2c2a-4e9f-92c0-4948881ec2f0 00:27:22.910 08:23:56 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:23.168 08:23:56 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:23.427 ************************************ 00:27:23.427 END TEST lvs_grow_dirty 00:27:23.427 ************************************ 00:27:23.427 00:27:23.427 real 0m18.623s 00:27:23.427 user 0m39.730s 00:27:23.427 sys 0m6.606s 00:27:23.427 08:23:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:23.427 08:23:56 -- common/autotest_common.sh@10 -- # set +x 00:27:23.427 08:23:56 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:27:23.427 08:23:56 -- common/autotest_common.sh@796 -- # type=--id 00:27:23.427 08:23:56 -- 
common/autotest_common.sh@797 -- # id=0 00:27:23.427 08:23:56 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:27:23.427 08:23:56 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:23.427 08:23:56 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:27:23.427 08:23:56 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:27:23.427 08:23:56 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:27:23.427 08:23:56 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:23.427 nvmf_trace.0 00:27:23.685 08:23:56 -- common/autotest_common.sh@811 -- # return 0 00:27:23.685 08:23:56 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:27:23.685 08:23:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:23.685 08:23:56 -- nvmf/common.sh@116 -- # sync 00:27:23.685 08:23:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:23.685 08:23:56 -- nvmf/common.sh@119 -- # set +e 00:27:23.685 08:23:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:23.685 08:23:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:23.685 rmmod nvme_tcp 00:27:23.685 rmmod nvme_fabrics 00:27:23.685 rmmod nvme_keyring 00:27:23.685 08:23:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:23.685 08:23:56 -- nvmf/common.sh@123 -- # set -e 00:27:23.685 08:23:56 -- nvmf/common.sh@124 -- # return 0 00:27:23.685 08:23:56 -- nvmf/common.sh@477 -- # '[' -n 61480 ']' 00:27:23.685 08:23:56 -- nvmf/common.sh@478 -- # killprocess 61480 00:27:23.685 08:23:56 -- common/autotest_common.sh@926 -- # '[' -z 61480 ']' 00:27:23.685 08:23:56 -- common/autotest_common.sh@930 -- # kill -0 61480 00:27:23.685 08:23:56 -- common/autotest_common.sh@931 -- # uname 00:27:23.685 08:23:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:23.685 08:23:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61480 00:27:23.685 killing process with pid 61480 00:27:23.685 08:23:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:23.685 08:23:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:23.685 08:23:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61480' 00:27:23.685 08:23:56 -- common/autotest_common.sh@945 -- # kill 61480 00:27:23.685 08:23:56 -- common/autotest_common.sh@950 -- # wait 61480 00:27:23.943 08:23:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:23.943 08:23:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:23.943 08:23:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:23.943 08:23:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.943 08:23:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:23.943 08:23:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.943 08:23:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.943 08:23:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.943 08:23:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:23.943 00:27:23.943 real 0m37.428s 00:27:23.943 user 1m0.668s 00:27:23.943 sys 0m9.496s 00:27:23.943 08:23:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:23.943 08:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:23.943 ************************************ 00:27:23.943 END TEST nvmf_lvs_grow 00:27:23.943 ************************************ 00:27:24.203 08:23:57 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:27:24.203 08:23:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:24.203 08:23:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:24.203 08:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:24.203 ************************************ 00:27:24.203 START TEST nvmf_bdev_io_wait 00:27:24.203 ************************************ 00:27:24.203 08:23:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:27:24.203 * Looking for test storage... 00:27:24.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:24.203 08:23:57 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:24.203 08:23:57 -- nvmf/common.sh@7 -- # uname -s 00:27:24.203 08:23:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.203 08:23:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.203 08:23:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.203 08:23:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.203 08:23:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.203 08:23:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.203 08:23:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.203 08:23:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.203 08:23:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.203 08:23:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.203 08:23:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:27:24.203 08:23:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:27:24.203 08:23:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.203 08:23:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.203 08:23:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:24.203 08:23:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:24.203 08:23:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.203 08:23:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.203 08:23:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.204 08:23:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.204 08:23:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.204 08:23:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.204 08:23:57 -- paths/export.sh@5 -- # export PATH 00:27:24.204 08:23:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.204 08:23:57 -- nvmf/common.sh@46 -- # : 0 00:27:24.204 08:23:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:24.204 08:23:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:24.204 08:23:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:24.204 08:23:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.204 08:23:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.204 08:23:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:24.204 08:23:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:24.204 08:23:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:24.204 08:23:57 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:24.204 08:23:57 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:24.204 08:23:57 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:27:24.204 08:23:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:24.204 08:23:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.204 08:23:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:24.204 08:23:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:24.204 08:23:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:24.204 08:23:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.204 08:23:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.204 08:23:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.204 08:23:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:24.204 08:23:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:24.204 08:23:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:24.204 08:23:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:24.204 08:23:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
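(Annotation, not part of the captured log.) The common.sh setup above generates a random host NQN with nvme gen-hostnqn and derives the host ID from its UUID suffix; these feed the NVME_HOST arguments used by kernel-initiator tests. bdev_io_wait itself drives I/O with SPDK's userspace bdevperf, so nothing in this run calls nvme-cli, but for reference this is a hedged sketch of how that identity would be consumed, assuming nvme-cli is installed and the cnode1 subsystem created later in this test is listening on 10.0.0.2:4420:

NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # bare UUID portion, as common.sh stores it
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"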
00:27:24.204 08:23:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:24.204 08:23:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.204 08:23:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.204 08:23:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:24.204 08:23:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:24.204 08:23:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:24.204 08:23:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:24.204 08:23:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:24.204 08:23:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.204 08:23:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:24.204 08:23:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:24.204 08:23:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:24.204 08:23:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:24.204 08:23:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:24.204 08:23:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:24.204 Cannot find device "nvmf_tgt_br" 00:27:24.204 08:23:57 -- nvmf/common.sh@154 -- # true 00:27:24.204 08:23:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:24.204 Cannot find device "nvmf_tgt_br2" 00:27:24.204 08:23:57 -- nvmf/common.sh@155 -- # true 00:27:24.204 08:23:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:24.204 08:23:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:24.204 Cannot find device "nvmf_tgt_br" 00:27:24.204 08:23:57 -- nvmf/common.sh@157 -- # true 00:27:24.204 08:23:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:24.462 Cannot find device "nvmf_tgt_br2" 00:27:24.462 08:23:57 -- nvmf/common.sh@158 -- # true 00:27:24.463 08:23:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:24.463 08:23:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:24.463 08:23:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:24.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:24.463 08:23:57 -- nvmf/common.sh@161 -- # true 00:27:24.463 08:23:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:24.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:24.463 08:23:57 -- nvmf/common.sh@162 -- # true 00:27:24.463 08:23:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:24.463 08:23:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:24.463 08:23:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:24.463 08:23:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:24.463 08:23:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:24.463 08:23:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:24.463 08:23:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:24.463 08:23:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:24.463 08:23:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:24.463 
08:23:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:24.463 08:23:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:24.463 08:23:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:24.463 08:23:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:24.463 08:23:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:24.463 08:23:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:24.463 08:23:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:24.463 08:23:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:24.463 08:23:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:24.463 08:23:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:24.463 08:23:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:24.463 08:23:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:24.463 08:23:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:24.463 08:23:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:24.463 08:23:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:24.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:27:24.463 00:27:24.463 --- 10.0.0.2 ping statistics --- 00:27:24.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.463 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:27:24.463 08:23:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:24.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:24.463 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:27:24.463 00:27:24.463 --- 10.0.0.3 ping statistics --- 00:27:24.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.463 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:27:24.463 08:23:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:24.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:27:24.463 00:27:24.463 --- 10.0.0.1 ping statistics --- 00:27:24.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.463 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:27:24.463 08:23:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.463 08:23:57 -- nvmf/common.sh@421 -- # return 0 00:27:24.463 08:23:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:24.463 08:23:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.463 08:23:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:24.463 08:23:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:24.463 08:23:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.463 08:23:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:24.463 08:23:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:24.463 08:23:57 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:24.463 08:23:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:24.463 08:23:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:24.463 08:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:24.723 08:23:57 -- nvmf/common.sh@469 -- # nvmfpid=61784 00:27:24.723 08:23:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:24.723 08:23:57 -- nvmf/common.sh@470 -- # waitforlisten 61784 00:27:24.723 08:23:57 -- common/autotest_common.sh@819 -- # '[' -z 61784 ']' 00:27:24.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.723 08:23:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.723 08:23:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:24.723 08:23:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.723 08:23:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:24.723 08:23:57 -- common/autotest_common.sh@10 -- # set +x 00:27:24.723 [2024-04-17 08:23:57.845144] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:24.723 [2024-04-17 08:23:57.845231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.723 [2024-04-17 08:23:57.971716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.983 [2024-04-17 08:23:58.073380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:24.983 [2024-04-17 08:23:58.073515] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.983 [2024-04-17 08:23:58.073524] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.983 [2024-04-17 08:23:58.073530] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
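(Annotation, not part of the captured log.) The nvmf_veth_init block above builds the NET_TYPE=virt topology: a network namespace for the target, veth pairs whose host-side ends are joined by a bridge, an iptables rule admitting NVMe/TCP traffic on port 4420, and ping checks in both directions before the target starts. A condensed stand-alone sketch of the same topology, using the interface names and addresses from the log (requires root; the log additionally creates a second target interface, nvmf_tgt_if2 on 10.0.0.3, which is omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                            # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> host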
00:27:24.983 [2024-04-17 08:23:58.073652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.983 [2024-04-17 08:23:58.073953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.983 [2024-04-17 08:23:58.073968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.983 [2024-04-17 08:23:58.073969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.553 08:23:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:25.553 08:23:58 -- common/autotest_common.sh@852 -- # return 0 00:27:25.553 08:23:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:25.553 08:23:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:25.553 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.553 08:23:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.553 08:23:58 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:27:25.553 08:23:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.553 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.553 08:23:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.553 08:23:58 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:27:25.553 08:23:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.553 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.553 08:23:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.553 08:23:58 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.553 08:23:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.553 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.553 [2024-04-17 08:23:58.866688] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.553 08:23:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.553 08:23:58 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:25.553 08:23:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.553 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.813 Malloc0 00:27:25.813 08:23:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:25.813 08:23:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.813 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.813 08:23:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:25.813 08:23:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.813 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.813 08:23:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.813 08:23:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.813 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.813 [2024-04-17 08:23:58.938061] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.813 08:23:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61822 00:27:25.813 08:23:58 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:27:25.813 08:23:58 -- nvmf/common.sh@520 -- # config=() 00:27:25.813 08:23:58 -- nvmf/common.sh@520 -- # local subsystem config 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:27:25.813 08:23:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:25.813 08:23:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:25.813 { 00:27:25.813 "params": { 00:27:25.813 "name": "Nvme$subsystem", 00:27:25.813 "trtype": "$TEST_TRANSPORT", 00:27:25.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.813 "adrfam": "ipv4", 00:27:25.813 "trsvcid": "$NVMF_PORT", 00:27:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.813 "hdgst": ${hdgst:-false}, 00:27:25.813 "ddgst": ${ddgst:-false} 00:27:25.813 }, 00:27:25.813 "method": "bdev_nvme_attach_controller" 00:27:25.813 } 00:27:25.813 EOF 00:27:25.813 )") 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@30 -- # READ_PID=61824 00:27:25.813 08:23:58 -- nvmf/common.sh@542 -- # cat 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61827 00:27:25.813 08:23:58 -- nvmf/common.sh@520 -- # config=() 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:27:25.813 08:23:58 -- nvmf/common.sh@520 -- # local subsystem config 00:27:25.813 08:23:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:25.813 08:23:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:25.813 { 00:27:25.813 "params": { 00:27:25.813 "name": "Nvme$subsystem", 00:27:25.813 "trtype": "$TEST_TRANSPORT", 00:27:25.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.813 "adrfam": "ipv4", 00:27:25.813 "trsvcid": "$NVMF_PORT", 00:27:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.813 "hdgst": ${hdgst:-false}, 00:27:25.813 "ddgst": ${ddgst:-false} 00:27:25.813 }, 00:27:25.813 "method": "bdev_nvme_attach_controller" 00:27:25.813 } 00:27:25.813 EOF 00:27:25.813 )") 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:27:25.813 08:23:58 -- nvmf/common.sh@520 -- # config=() 00:27:25.813 08:23:58 -- nvmf/common.sh@520 -- # local subsystem config 00:27:25.813 08:23:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:25.813 08:23:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:25.813 { 00:27:25.813 "params": { 00:27:25.813 "name": "Nvme$subsystem", 00:27:25.813 "trtype": "$TEST_TRANSPORT", 00:27:25.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.813 "adrfam": "ipv4", 00:27:25.813 "trsvcid": "$NVMF_PORT", 00:27:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.813 "hdgst": ${hdgst:-false}, 00:27:25.813 "ddgst": ${ddgst:-false} 00:27:25.813 }, 00:27:25.813 "method": "bdev_nvme_attach_controller" 00:27:25.813 } 00:27:25.813 EOF 00:27:25.813 )") 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61830 00:27:25.813 08:23:58 -- 
nvmf/common.sh@542 -- # cat 00:27:25.813 08:23:58 -- nvmf/common.sh@544 -- # jq . 00:27:25.813 08:23:58 -- nvmf/common.sh@542 -- # cat 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:27:25.813 08:23:58 -- nvmf/common.sh@544 -- # jq . 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@35 -- # sync 00:27:25.813 08:23:58 -- nvmf/common.sh@545 -- # IFS=, 00:27:25.813 08:23:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:25.813 "params": { 00:27:25.813 "name": "Nvme1", 00:27:25.813 "trtype": "tcp", 00:27:25.813 "traddr": "10.0.0.2", 00:27:25.813 "adrfam": "ipv4", 00:27:25.813 "trsvcid": "4420", 00:27:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.813 "hdgst": false, 00:27:25.813 "ddgst": false 00:27:25.813 }, 00:27:25.813 "method": "bdev_nvme_attach_controller" 00:27:25.813 }' 00:27:25.813 08:23:58 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:27:25.813 08:23:58 -- nvmf/common.sh@520 -- # config=() 00:27:25.813 08:23:58 -- nvmf/common.sh@520 -- # local subsystem config 00:27:25.813 08:23:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:25.813 08:23:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:25.813 { 00:27:25.813 "params": { 00:27:25.813 "name": "Nvme$subsystem", 00:27:25.813 "trtype": "$TEST_TRANSPORT", 00:27:25.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.813 "adrfam": "ipv4", 00:27:25.813 "trsvcid": "$NVMF_PORT", 00:27:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.813 "hdgst": ${hdgst:-false}, 00:27:25.813 "ddgst": ${ddgst:-false} 00:27:25.813 }, 00:27:25.813 "method": "bdev_nvme_attach_controller" 00:27:25.813 } 00:27:25.813 EOF 00:27:25.813 )") 00:27:25.813 08:23:58 -- nvmf/common.sh@545 -- # IFS=, 00:27:25.813 08:23:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:25.813 "params": { 00:27:25.813 "name": "Nvme1", 00:27:25.813 "trtype": "tcp", 00:27:25.813 "traddr": "10.0.0.2", 00:27:25.813 "adrfam": "ipv4", 00:27:25.813 "trsvcid": "4420", 00:27:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.813 "hdgst": false, 00:27:25.813 "ddgst": false 00:27:25.813 }, 00:27:25.813 "method": "bdev_nvme_attach_controller" 00:27:25.813 }' 00:27:25.813 08:23:58 -- nvmf/common.sh@544 -- # jq . 00:27:25.813 08:23:58 -- nvmf/common.sh@542 -- # cat 00:27:25.813 08:23:58 -- nvmf/common.sh@545 -- # IFS=, 00:27:25.813 08:23:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:25.813 "params": { 00:27:25.813 "name": "Nvme1", 00:27:25.813 "trtype": "tcp", 00:27:25.813 "traddr": "10.0.0.2", 00:27:25.813 "adrfam": "ipv4", 00:27:25.813 "trsvcid": "4420", 00:27:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.813 "hdgst": false, 00:27:25.813 "ddgst": false 00:27:25.813 }, 00:27:25.813 "method": "bdev_nvme_attach_controller" 00:27:25.813 }' 00:27:25.813 08:23:58 -- nvmf/common.sh@544 -- # jq . 
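(Annotation, not part of the captured log.) Earlier in this test the rpc_cmd calls provision the target entirely over JSON-RPC: small bdev_io pool and cache sizes are set before init so bdevperf later exhausts bdev_io allocations and exercises the io_wait path, then the TCP transport, a malloc bdev, and subsystem cnode1 with its listener are created. A sketch of the equivalent direct rpc.py invocations against the default /var/tmp/spdk.sock socket (the test wraps these in its rpc_cmd helper):

RPC() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # default socket /var/tmp/spdk.sock

RPC bdev_set_options -p 5 -c 1     # tiny bdev_io pool/cache so the io_wait path is hit under load
RPC framework_start_init           # finish init, since the target was started with --wait-for-rpc
RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte in-capsule data size
RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev with 512-byte blocks
RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420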
00:27:25.813 08:23:58 -- nvmf/common.sh@545 -- # IFS=, 00:27:25.813 08:23:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:25.813 "params": { 00:27:25.813 "name": "Nvme1", 00:27:25.813 "trtype": "tcp", 00:27:25.813 "traddr": "10.0.0.2", 00:27:25.813 "adrfam": "ipv4", 00:27:25.813 "trsvcid": "4420", 00:27:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.813 "hdgst": false, 00:27:25.813 "ddgst": false 00:27:25.813 }, 00:27:25.813 "method": "bdev_nvme_attach_controller" 00:27:25.813 }' 00:27:25.813 [2024-04-17 08:23:58.993435] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:25.813 [2024-04-17 08:23:58.993553] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:25.813 [2024-04-17 08:23:58.998800] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:25.813 [2024-04-17 08:23:58.998904] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:27:25.813 08:23:59 -- target/bdev_io_wait.sh@37 -- # wait 61822 00:27:25.813 [2024-04-17 08:23:59.016612] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:25.814 [2024-04-17 08:23:59.016741] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:27:25.814 [2024-04-17 08:23:59.020535] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:25.814 [2024-04-17 08:23:59.020640] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:27:26.073 [2024-04-17 08:23:59.187897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.073 [2024-04-17 08:23:59.246011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.073 [2024-04-17 08:23:59.272873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:26.073 [2024-04-17 08:23:59.333583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.073 [2024-04-17 08:23:59.357411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:27:26.073 [2024-04-17 08:23:59.371245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.073 Running I/O for 1 seconds... 00:27:26.332 [2024-04-17 08:23:59.424672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:27:26.332 [2024-04-17 08:23:59.475808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:27:26.332 Running I/O for 1 seconds... 00:27:26.332 Running I/O for 1 seconds... 00:27:26.332 Running I/O for 1 seconds... 
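(Annotation, not part of the captured log.) Each of the four bdevperf instances above receives its target description on /dev/fd/63 from gen_nvmf_target_json, which emits the bdev_nvme_attach_controller entry printed in the log so bdevperf creates an NVMe bdev backed by cnode1. A sketch of one launch (the write workload) written out explicitly, assuming nvmf/common.sh has been sourced with the same environment as this run; the other three instances differ only in core mask (-m), shared-memory id (-i), and workload (-w read, flush, unmap):

source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides gen_nvmf_target_json

# -m/-i: core mask and shm id, -q 128: queue depth, -o 4096: I/O size in bytes,
# -w write: workload type, -t 1: seconds to run, -s 256: MB of hugepage memory for this instance.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
    --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256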
00:27:27.295 00:27:27.295 Latency(us) 00:27:27.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.295 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:27:27.295 Nvme1n1 : 1.02 6624.72 25.88 0.00 0.00 19063.72 8013.14 36631.48 00:27:27.295 =================================================================================================================== 00:27:27.295 Total : 6624.72 25.88 0.00 0.00 19063.72 8013.14 36631.48 00:27:27.295 00:27:27.295 Latency(us) 00:27:27.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.295 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:27:27.295 Nvme1n1 : 1.01 6307.70 24.64 0.00 0.00 20219.06 6725.31 37547.26 00:27:27.295 =================================================================================================================== 00:27:27.295 Total : 6307.70 24.64 0.00 0.00 20219.06 6725.31 37547.26 00:27:27.295 00:27:27.295 Latency(us) 00:27:27.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.295 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:27:27.295 Nvme1n1 : 1.01 10295.16 40.22 0.00 0.00 12393.30 5523.34 24268.35 00:27:27.295 =================================================================================================================== 00:27:27.295 Total : 10295.16 40.22 0.00 0.00 12393.30 5523.34 24268.35 00:27:27.554 00:27:27.554 Latency(us) 00:27:27.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.554 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:27:27.554 Nvme1n1 : 1.00 188583.96 736.66 0.00 0.00 676.35 309.44 1051.72 00:27:27.554 =================================================================================================================== 00:27:27.554 Total : 188583.96 736.66 0.00 0.00 676.35 309.44 1051.72 00:27:27.554 08:24:00 -- target/bdev_io_wait.sh@38 -- # wait 61824 00:27:27.554 08:24:00 -- target/bdev_io_wait.sh@39 -- # wait 61827 00:27:27.554 08:24:00 -- target/bdev_io_wait.sh@40 -- # wait 61830 00:27:27.554 08:24:00 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:27.554 08:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.554 08:24:00 -- common/autotest_common.sh@10 -- # set +x 00:27:27.554 08:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.554 08:24:00 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:27:27.554 08:24:00 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:27:27.554 08:24:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:27.554 08:24:00 -- nvmf/common.sh@116 -- # sync 00:27:27.814 08:24:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:27.814 08:24:00 -- nvmf/common.sh@119 -- # set +e 00:27:27.814 08:24:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:27.814 08:24:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:27.814 rmmod nvme_tcp 00:27:27.814 rmmod nvme_fabrics 00:27:27.814 rmmod nvme_keyring 00:27:27.814 08:24:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:27.814 08:24:00 -- nvmf/common.sh@123 -- # set -e 00:27:27.814 08:24:00 -- nvmf/common.sh@124 -- # return 0 00:27:27.814 08:24:00 -- nvmf/common.sh@477 -- # '[' -n 61784 ']' 00:27:27.814 08:24:00 -- nvmf/common.sh@478 -- # killprocess 61784 00:27:27.814 08:24:00 -- common/autotest_common.sh@926 -- # '[' -z 61784 ']' 00:27:27.814 08:24:00 -- common/autotest_common.sh@930 
-- # kill -0 61784 00:27:27.814 08:24:00 -- common/autotest_common.sh@931 -- # uname 00:27:27.814 08:24:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:27.814 08:24:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61784 00:27:27.814 killing process with pid 61784 00:27:27.814 08:24:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:27.814 08:24:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:27.814 08:24:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61784' 00:27:27.814 08:24:01 -- common/autotest_common.sh@945 -- # kill 61784 00:27:27.814 08:24:01 -- common/autotest_common.sh@950 -- # wait 61784 00:27:28.074 08:24:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:28.074 08:24:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:28.074 08:24:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:28.074 08:24:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.074 08:24:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:28.074 08:24:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.074 08:24:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.074 08:24:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.074 08:24:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:28.074 00:27:28.074 real 0m4.025s 00:27:28.074 user 0m17.736s 00:27:28.074 sys 0m1.801s 00:27:28.074 08:24:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.074 08:24:01 -- common/autotest_common.sh@10 -- # set +x 00:27:28.074 ************************************ 00:27:28.074 END TEST nvmf_bdev_io_wait 00:27:28.074 ************************************ 00:27:28.074 08:24:01 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:27:28.074 08:24:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:28.074 08:24:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:28.074 08:24:01 -- common/autotest_common.sh@10 -- # set +x 00:27:28.074 ************************************ 00:27:28.074 START TEST nvmf_queue_depth 00:27:28.074 ************************************ 00:27:28.074 08:24:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:27:28.335 * Looking for test storage... 
00:27:28.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:28.335 08:24:01 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:28.335 08:24:01 -- nvmf/common.sh@7 -- # uname -s 00:27:28.335 08:24:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.335 08:24:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.335 08:24:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.335 08:24:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.335 08:24:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.335 08:24:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.335 08:24:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.335 08:24:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.335 08:24:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.335 08:24:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.335 08:24:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:27:28.335 08:24:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:27:28.335 08:24:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.335 08:24:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.335 08:24:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:28.335 08:24:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:28.335 08:24:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.335 08:24:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.335 08:24:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.335 08:24:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.335 08:24:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.335 08:24:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.335 08:24:01 -- 
paths/export.sh@5 -- # export PATH 00:27:28.335 08:24:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.335 08:24:01 -- nvmf/common.sh@46 -- # : 0 00:27:28.335 08:24:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:28.335 08:24:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:28.335 08:24:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:28.335 08:24:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.335 08:24:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.335 08:24:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:28.335 08:24:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:28.335 08:24:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:28.335 08:24:01 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:27:28.335 08:24:01 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:27:28.335 08:24:01 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:28.335 08:24:01 -- target/queue_depth.sh@19 -- # nvmftestinit 00:27:28.335 08:24:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:28.335 08:24:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.335 08:24:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:28.335 08:24:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:28.335 08:24:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:28.335 08:24:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.335 08:24:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.335 08:24:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.335 08:24:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:28.335 08:24:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:28.335 08:24:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:28.335 08:24:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:28.335 08:24:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:28.335 08:24:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:28.335 08:24:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.335 08:24:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.335 08:24:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:28.335 08:24:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:28.335 08:24:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:28.335 08:24:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:28.335 08:24:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:28.335 08:24:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.335 08:24:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:28.335 08:24:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:28.335 08:24:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:28.335 08:24:01 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:28.335 08:24:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:28.335 08:24:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:28.335 Cannot find device "nvmf_tgt_br" 00:27:28.335 08:24:01 -- nvmf/common.sh@154 -- # true 00:27:28.335 08:24:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:28.335 Cannot find device "nvmf_tgt_br2" 00:27:28.335 08:24:01 -- nvmf/common.sh@155 -- # true 00:27:28.335 08:24:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:28.335 08:24:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:28.335 Cannot find device "nvmf_tgt_br" 00:27:28.335 08:24:01 -- nvmf/common.sh@157 -- # true 00:27:28.335 08:24:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:28.335 Cannot find device "nvmf_tgt_br2" 00:27:28.335 08:24:01 -- nvmf/common.sh@158 -- # true 00:27:28.335 08:24:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:28.335 08:24:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:28.335 08:24:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:28.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:28.335 08:24:01 -- nvmf/common.sh@161 -- # true 00:27:28.335 08:24:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:28.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:28.335 08:24:01 -- nvmf/common.sh@162 -- # true 00:27:28.335 08:24:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:28.335 08:24:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:28.335 08:24:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:28.595 08:24:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:28.595 08:24:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:28.595 08:24:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:28.595 08:24:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:28.595 08:24:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:28.595 08:24:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:28.595 08:24:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:28.595 08:24:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:28.595 08:24:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:28.595 08:24:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:28.595 08:24:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:28.595 08:24:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:28.595 08:24:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:28.595 08:24:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:28.595 08:24:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:28.595 08:24:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:28.595 08:24:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:28.595 08:24:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:28.595 
08:24:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:28.595 08:24:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:28.595 08:24:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:28.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:27:28.595 00:27:28.595 --- 10.0.0.2 ping statistics --- 00:27:28.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.595 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:28.595 08:24:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:28.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:28.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:27:28.595 00:27:28.595 --- 10.0.0.3 ping statistics --- 00:27:28.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.595 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:27:28.595 08:24:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:28.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:28.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:27:28.595 00:27:28.595 --- 10.0.0.1 ping statistics --- 00:27:28.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.595 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:27:28.595 08:24:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.595 08:24:01 -- nvmf/common.sh@421 -- # return 0 00:27:28.595 08:24:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:28.595 08:24:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.595 08:24:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:28.595 08:24:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:28.595 08:24:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.595 08:24:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:28.595 08:24:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:28.595 08:24:01 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:27:28.595 08:24:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:28.595 08:24:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:28.595 08:24:01 -- common/autotest_common.sh@10 -- # set +x 00:27:28.595 08:24:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:28.595 08:24:01 -- nvmf/common.sh@469 -- # nvmfpid=62059 00:27:28.595 08:24:01 -- nvmf/common.sh@470 -- # waitforlisten 62059 00:27:28.595 08:24:01 -- common/autotest_common.sh@819 -- # '[' -z 62059 ']' 00:27:28.595 08:24:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.595 08:24:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:28.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.595 08:24:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.595 08:24:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:28.595 08:24:01 -- common/autotest_common.sh@10 -- # set +x 00:27:28.855 [2024-04-17 08:24:01.935137] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:28.855 [2024-04-17 08:24:01.935205] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.855 [2024-04-17 08:24:02.075148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.855 [2024-04-17 08:24:02.169896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:28.855 [2024-04-17 08:24:02.170126] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.855 [2024-04-17 08:24:02.170153] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.855 [2024-04-17 08:24:02.170195] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.855 [2024-04-17 08:24:02.170238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.793 08:24:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:29.793 08:24:02 -- common/autotest_common.sh@852 -- # return 0 00:27:29.793 08:24:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:29.793 08:24:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:29.793 08:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:29.793 08:24:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.793 08:24:02 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.794 08:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.794 08:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:29.794 [2024-04-17 08:24:02.924049] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.794 08:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.794 08:24:02 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:29.794 08:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.794 08:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:29.794 Malloc0 00:27:29.794 08:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.794 08:24:02 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:29.794 08:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.794 08:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:29.794 08:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.794 08:24:02 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:29.794 08:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.794 08:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:29.794 08:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.794 08:24:02 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.794 08:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.794 08:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:29.794 [2024-04-17 08:24:02.994099] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.794 08:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.794 08:24:02 -- target/queue_depth.sh@30 -- # bdevperf_pid=62092 00:27:29.794 08:24:02 
-- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:27:29.794 08:24:03 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:29.794 08:24:03 -- target/queue_depth.sh@33 -- # waitforlisten 62092 /var/tmp/bdevperf.sock 00:27:29.794 08:24:03 -- common/autotest_common.sh@819 -- # '[' -z 62092 ']' 00:27:29.794 08:24:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:29.794 08:24:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:29.794 08:24:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:29.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:29.794 08:24:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:29.794 08:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:29.794 [2024-04-17 08:24:03.049292] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:29.794 [2024-04-17 08:24:03.049462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62092 ] 00:27:30.052 [2024-04-17 08:24:03.188000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.052 [2024-04-17 08:24:03.288459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.617 08:24:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:30.617 08:24:03 -- common/autotest_common.sh@852 -- # return 0 00:27:30.617 08:24:03 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:30.617 08:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.617 08:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:30.875 NVMe0n1 00:27:30.875 08:24:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.875 08:24:04 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:30.875 Running I/O for 10 seconds... 
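Editor's note: the queue_depth test provisions the target entirely over JSON-RPC and then drives it with bdevperf at queue depth 1024. A sketch of the equivalent manual sequence, using the same parameters that appear in the trace (rpc_cmd in the script is a thin wrapper around scripts/rpc.py; the full CI paths to bdevperf and bdevperf.py are shown in the log above):

# target side (nvmf_tgt runs inside nvmf_tgt_ns_spdk, default socket /var/tmp/spdk.sock)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB RAM-backed bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf with a 1024-deep, 4 KiB verify workload for 10 seconds
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests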
00:27:40.874 00:27:40.874 Latency(us) 00:27:40.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.874 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:27:40.874 Verification LBA range: start 0x0 length 0x4000 00:27:40.874 NVMe0n1 : 10.06 15879.22 62.03 0.00 0.00 64262.63 12019.70 76010.31 00:27:40.874 =================================================================================================================== 00:27:40.874 Total : 15879.22 62.03 0.00 0.00 64262.63 12019.70 76010.31 00:27:40.874 0 00:27:40.874 08:24:14 -- target/queue_depth.sh@39 -- # killprocess 62092 00:27:40.874 08:24:14 -- common/autotest_common.sh@926 -- # '[' -z 62092 ']' 00:27:40.874 08:24:14 -- common/autotest_common.sh@930 -- # kill -0 62092 00:27:40.874 08:24:14 -- common/autotest_common.sh@931 -- # uname 00:27:40.874 08:24:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:40.874 08:24:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62092 00:27:40.874 killing process with pid 62092 00:27:40.874 Received shutdown signal, test time was about 10.000000 seconds 00:27:40.874 00:27:40.874 Latency(us) 00:27:40.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.874 =================================================================================================================== 00:27:40.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:40.874 08:24:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:40.874 08:24:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:40.874 08:24:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62092' 00:27:40.874 08:24:14 -- common/autotest_common.sh@945 -- # kill 62092 00:27:40.874 08:24:14 -- common/autotest_common.sh@950 -- # wait 62092 00:27:41.134 08:24:14 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:27:41.134 08:24:14 -- target/queue_depth.sh@43 -- # nvmftestfini 00:27:41.134 08:24:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:41.134 08:24:14 -- nvmf/common.sh@116 -- # sync 00:27:41.393 08:24:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:41.393 08:24:14 -- nvmf/common.sh@119 -- # set +e 00:27:41.393 08:24:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:41.393 08:24:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:41.393 rmmod nvme_tcp 00:27:41.393 rmmod nvme_fabrics 00:27:41.393 rmmod nvme_keyring 00:27:41.393 08:24:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:41.393 08:24:14 -- nvmf/common.sh@123 -- # set -e 00:27:41.393 08:24:14 -- nvmf/common.sh@124 -- # return 0 00:27:41.393 08:24:14 -- nvmf/common.sh@477 -- # '[' -n 62059 ']' 00:27:41.393 08:24:14 -- nvmf/common.sh@478 -- # killprocess 62059 00:27:41.393 08:24:14 -- common/autotest_common.sh@926 -- # '[' -z 62059 ']' 00:27:41.393 08:24:14 -- common/autotest_common.sh@930 -- # kill -0 62059 00:27:41.393 08:24:14 -- common/autotest_common.sh@931 -- # uname 00:27:41.393 08:24:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:41.393 08:24:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62059 00:27:41.393 killing process with pid 62059 00:27:41.393 08:24:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:41.393 08:24:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:41.393 08:24:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62059' 00:27:41.393 08:24:14 -- 
common/autotest_common.sh@945 -- # kill 62059 00:27:41.393 08:24:14 -- common/autotest_common.sh@950 -- # wait 62059 00:27:41.652 08:24:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:41.652 08:24:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:41.652 08:24:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:41.652 08:24:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.652 08:24:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:41.652 08:24:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.652 08:24:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.652 08:24:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.652 08:24:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:41.652 00:27:41.652 real 0m13.488s 00:27:41.652 user 0m23.504s 00:27:41.652 sys 0m1.869s 00:27:41.652 08:24:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.652 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.652 ************************************ 00:27:41.652 END TEST nvmf_queue_depth 00:27:41.652 ************************************ 00:27:41.652 08:24:14 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:27:41.652 08:24:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:41.652 08:24:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:41.652 08:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.652 ************************************ 00:27:41.652 START TEST nvmf_multipath 00:27:41.652 ************************************ 00:27:41.652 08:24:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:27:41.910 * Looking for test storage... 
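Editor's note: the run above completed at roughly 15.9k IOPS with ~64 ms average latency at queue depth 1024, after which nvmftestfini unwinds everything: stop bdevperf and nvmf_tgt, unload the kernel initiator modules, and remove the namespaced network. A condensed sketch of that cleanup with the same names; the namespace removal goes through the _remove_spdk_ns helper, whose body is not shown in this trace, so the `ip netns delete` line here is an assumption about its net effect:

# condensed sketch of the nvmftestfini teardown traced above
kill "$bdevperf_pid" && wait "$bdevperf_pid"     # stop the initiator-side perf app
kill "$nvmfpid"      && wait "$nvmfpid"          # stop nvmf_tgt
modprobe -v -r nvme-tcp                          # per the rmmod lines, this pulls out nvme_tcp/nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
ip netns delete nvmf_tgt_ns_spdk                 # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if                    # clear the initiator-side address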
00:27:41.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:41.910 08:24:15 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:41.910 08:24:15 -- nvmf/common.sh@7 -- # uname -s 00:27:41.910 08:24:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.910 08:24:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.910 08:24:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.910 08:24:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.910 08:24:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.910 08:24:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.910 08:24:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.910 08:24:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.910 08:24:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.910 08:24:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.910 08:24:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:27:41.910 08:24:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:27:41.910 08:24:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.910 08:24:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.910 08:24:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:41.910 08:24:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:41.910 08:24:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.910 08:24:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.910 08:24:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.910 08:24:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.910 08:24:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.910 08:24:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.910 08:24:15 -- 
paths/export.sh@5 -- # export PATH 00:27:41.910 08:24:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.910 08:24:15 -- nvmf/common.sh@46 -- # : 0 00:27:41.910 08:24:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:41.910 08:24:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:41.910 08:24:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:41.910 08:24:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.910 08:24:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.910 08:24:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:41.910 08:24:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:41.910 08:24:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:41.910 08:24:15 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:41.910 08:24:15 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:41.910 08:24:15 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:41.910 08:24:15 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:41.911 08:24:15 -- target/multipath.sh@43 -- # nvmftestinit 00:27:41.911 08:24:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:41.911 08:24:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.911 08:24:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:41.911 08:24:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:41.911 08:24:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:41.911 08:24:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.911 08:24:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.911 08:24:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.911 08:24:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:41.911 08:24:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:41.911 08:24:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:41.911 08:24:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:41.911 08:24:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:41.911 08:24:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:41.911 08:24:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.911 08:24:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.911 08:24:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:41.911 08:24:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:41.911 08:24:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:41.911 08:24:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:41.911 08:24:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:41.911 08:24:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.911 08:24:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:41.911 08:24:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:41.911 08:24:15 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:41.911 08:24:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:41.911 08:24:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:41.911 08:24:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:41.911 Cannot find device "nvmf_tgt_br" 00:27:41.911 08:24:15 -- nvmf/common.sh@154 -- # true 00:27:41.911 08:24:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:41.911 Cannot find device "nvmf_tgt_br2" 00:27:41.911 08:24:15 -- nvmf/common.sh@155 -- # true 00:27:41.911 08:24:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:41.911 08:24:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:41.911 Cannot find device "nvmf_tgt_br" 00:27:41.911 08:24:15 -- nvmf/common.sh@157 -- # true 00:27:41.911 08:24:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:41.911 Cannot find device "nvmf_tgt_br2" 00:27:41.911 08:24:15 -- nvmf/common.sh@158 -- # true 00:27:41.911 08:24:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:41.911 08:24:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:41.911 08:24:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:41.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.911 08:24:15 -- nvmf/common.sh@161 -- # true 00:27:41.911 08:24:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:41.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.911 08:24:15 -- nvmf/common.sh@162 -- # true 00:27:41.911 08:24:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:41.911 08:24:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:41.911 08:24:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:42.170 08:24:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:42.170 08:24:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:42.170 08:24:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:42.170 08:24:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:42.170 08:24:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:42.170 08:24:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:42.170 08:24:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:42.170 08:24:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:42.170 08:24:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:42.170 08:24:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:42.170 08:24:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:42.170 08:24:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:42.170 08:24:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:42.170 08:24:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:42.170 08:24:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:42.170 08:24:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:42.170 08:24:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:42.170 08:24:15 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:42.170 08:24:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:42.170 08:24:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:42.170 08:24:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:42.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:27:42.170 00:27:42.170 --- 10.0.0.2 ping statistics --- 00:27:42.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.170 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:27:42.170 08:24:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:42.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:42.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:27:42.170 00:27:42.170 --- 10.0.0.3 ping statistics --- 00:27:42.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.170 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:42.170 08:24:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:42.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:42.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:27:42.170 00:27:42.170 --- 10.0.0.1 ping statistics --- 00:27:42.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.170 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:27:42.170 08:24:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.170 08:24:15 -- nvmf/common.sh@421 -- # return 0 00:27:42.170 08:24:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:42.170 08:24:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.170 08:24:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:42.170 08:24:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:42.170 08:24:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.170 08:24:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:42.170 08:24:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:42.170 08:24:15 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:27:42.170 08:24:15 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:27:42.170 08:24:15 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:27:42.170 08:24:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:42.170 08:24:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:42.170 08:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:42.170 08:24:15 -- nvmf/common.sh@469 -- # nvmfpid=62411 00:27:42.170 08:24:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:42.170 08:24:15 -- nvmf/common.sh@470 -- # waitforlisten 62411 00:27:42.170 08:24:15 -- common/autotest_common.sh@819 -- # '[' -z 62411 ']' 00:27:42.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.170 08:24:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.170 08:24:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:42.170 08:24:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
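Editor's note: for the multipath test the target is started with -m 0xF (four reactors, versus the single core used for queue_depth), and the steps traced below publish the same subsystem behind two listeners, connect the kernel initiator over both paths, and then flip per-listener ANA states while fio runs. A condensed sketch of that flow, with flags, NQNs, and the hostnqn/hostid variables taken from the trace (-r on nvmf_create_subsystem enables ANA reporting):

# dual-listener / ANA flow, as traced below
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# connect the kernel initiator over both paths (same hostnqn/hostid, one connect per listener)
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

# flip a listener's ANA state and confirm the host sees it in sysfs
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
cat /sys/block/nvme0c0n1/ana_state               # check_ana_state expects "inaccessible" here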
00:27:42.170 08:24:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:42.170 08:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:42.428 [2024-04-17 08:24:15.526334] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:42.428 [2024-04-17 08:24:15.526439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.428 [2024-04-17 08:24:15.664151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.688 [2024-04-17 08:24:15.767269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:42.688 [2024-04-17 08:24:15.767499] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.688 [2024-04-17 08:24:15.767541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.688 [2024-04-17 08:24:15.767599] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.688 [2024-04-17 08:24:15.767769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.688 [2024-04-17 08:24:15.767997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.688 [2024-04-17 08:24:15.767917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.688 [2024-04-17 08:24:15.768003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.256 08:24:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:43.256 08:24:16 -- common/autotest_common.sh@852 -- # return 0 00:27:43.256 08:24:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:43.256 08:24:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:43.256 08:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:43.256 08:24:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.256 08:24:16 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:43.530 [2024-04-17 08:24:16.647444] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.530 08:24:16 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:43.791 Malloc0 00:27:43.791 08:24:16 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:27:43.791 08:24:17 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:44.050 08:24:17 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.309 [2024-04-17 08:24:17.467949] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.309 08:24:17 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:44.567 [2024-04-17 08:24:17.695803] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:44.567 08:24:17 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:27:44.567 08:24:17 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:27:44.824 08:24:17 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:27:44.824 08:24:17 -- common/autotest_common.sh@1177 -- # local i=0 00:27:44.824 08:24:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:44.824 08:24:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:44.824 08:24:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:46.726 08:24:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:46.726 08:24:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:46.726 08:24:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:27:46.726 08:24:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:46.726 08:24:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:46.726 08:24:20 -- common/autotest_common.sh@1187 -- # return 0 00:27:46.726 08:24:20 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:27:46.726 08:24:20 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:27:46.726 08:24:20 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:27:46.726 08:24:20 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:27:46.726 08:24:20 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:27:46.726 08:24:20 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:27:46.726 08:24:20 -- target/multipath.sh@38 -- # return 0 00:27:46.726 08:24:20 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:27:46.726 08:24:20 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:27:46.726 08:24:20 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:27:46.726 08:24:20 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:27:46.726 08:24:20 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:27:46.726 08:24:20 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:27:46.726 08:24:20 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:27:46.726 08:24:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:27:46.726 08:24:20 -- target/multipath.sh@22 -- # local timeout=20 00:27:46.726 08:24:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:46.726 08:24:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:46.726 08:24:20 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:46.726 08:24:20 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:27:46.726 08:24:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:27:46.726 08:24:20 -- target/multipath.sh@22 -- # local timeout=20 00:27:46.726 08:24:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:46.726 08:24:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:27:46.726 08:24:20 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:46.726 08:24:20 -- target/multipath.sh@85 -- # echo numa 00:27:46.726 08:24:20 -- target/multipath.sh@88 -- # fio_pid=62496 00:27:46.726 08:24:20 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:27:46.726 08:24:20 -- target/multipath.sh@90 -- # sleep 1 00:27:46.726 [global] 00:27:46.726 thread=1 00:27:46.726 invalidate=1 00:27:46.726 rw=randrw 00:27:46.726 time_based=1 00:27:46.726 runtime=6 00:27:46.726 ioengine=libaio 00:27:46.726 direct=1 00:27:46.726 bs=4096 00:27:46.726 iodepth=128 00:27:46.726 norandommap=0 00:27:46.726 numjobs=1 00:27:46.727 00:27:46.727 verify_dump=1 00:27:46.727 verify_backlog=512 00:27:46.727 verify_state_save=0 00:27:46.727 do_verify=1 00:27:46.727 verify=crc32c-intel 00:27:46.727 [job0] 00:27:46.727 filename=/dev/nvme0n1 00:27:46.986 Could not set queue depth (nvme0n1) 00:27:46.986 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:46.986 fio-3.35 00:27:46.986 Starting 1 thread 00:27:47.921 08:24:21 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:48.180 08:24:21 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:48.180 08:24:21 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:27:48.180 08:24:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:27:48.180 08:24:21 -- target/multipath.sh@22 -- # local timeout=20 00:27:48.180 08:24:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:48.180 08:24:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:48.180 08:24:21 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:48.180 08:24:21 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:27:48.180 08:24:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:27:48.180 08:24:21 -- target/multipath.sh@22 -- # local timeout=20 00:27:48.180 08:24:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:48.180 08:24:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:48.180 08:24:21 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:48.180 08:24:21 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:48.438 08:24:21 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:48.698 08:24:21 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:27:48.698 08:24:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:27:48.698 08:24:21 -- target/multipath.sh@22 -- # local timeout=20 00:27:48.698 08:24:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:48.698 08:24:21 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:27:48.698 08:24:21 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:48.698 08:24:21 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:27:48.698 08:24:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:27:48.698 08:24:21 -- target/multipath.sh@22 -- # local timeout=20 00:27:48.698 08:24:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:48.698 08:24:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:48.698 08:24:21 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:48.698 08:24:21 -- target/multipath.sh@104 -- # wait 62496 00:27:53.975 00:27:53.975 job0: (groupid=0, jobs=1): err= 0: pid=62522: Wed Apr 17 08:24:26 2024 00:27:53.975 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(272MiB/6007msec) 00:27:53.975 slat (usec): min=4, max=6107, avg=46.87, stdev=193.20 00:27:53.975 clat (usec): min=571, max=17425, avg=7490.28, stdev=1509.17 00:27:53.975 lat (usec): min=584, max=17437, avg=7537.16, stdev=1516.10 00:27:53.975 clat percentiles (usec): 00:27:53.975 | 1.00th=[ 4178], 5.00th=[ 5276], 10.00th=[ 5997], 20.00th=[ 6587], 00:27:53.975 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7570], 00:27:53.975 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 9241], 95.00th=[10814], 00:27:53.975 | 99.00th=[12125], 99.50th=[12780], 99.90th=[14615], 99.95th=[15926], 00:27:53.975 | 99.99th=[16909] 00:27:53.975 bw ( KiB/s): min=18416, max=30608, per=53.42%, avg=24776.73, stdev=4377.83, samples=11 00:27:53.975 iops : min= 4604, max= 7652, avg=6194.18, stdev=1094.46, samples=11 00:27:53.975 write: IOPS=6768, BW=26.4MiB/s (27.7MB/s)(145MiB/5479msec); 0 zone resets 00:27:53.975 slat (usec): min=9, max=3206, avg=61.76, stdev=121.62 00:27:53.975 clat (usec): min=402, max=17297, avg=6475.68, stdev=1334.48 00:27:53.975 lat (usec): min=482, max=17349, avg=6537.44, stdev=1340.34 00:27:53.975 clat percentiles (usec): 00:27:53.975 | 1.00th=[ 3228], 5.00th=[ 4178], 10.00th=[ 4686], 20.00th=[ 5604], 00:27:53.975 | 30.00th=[ 6063], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 6783], 00:27:53.975 | 70.00th=[ 6980], 80.00th=[ 7242], 90.00th=[ 7767], 95.00th=[ 8356], 00:27:53.975 | 99.00th=[10683], 99.50th=[11338], 99.90th=[12780], 99.95th=[13435], 00:27:53.975 | 99.99th=[17171] 00:27:53.975 bw ( KiB/s): min=18280, max=29856, per=91.41%, avg=24748.36, stdev=4052.65, samples=11 00:27:53.975 iops : min= 4570, max= 7464, avg=6187.09, stdev=1013.16, samples=11 00:27:53.975 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:27:53.976 lat (msec) : 2=0.10%, 4=1.64%, 10=92.45%, 20=5.80% 00:27:53.976 cpu : usr=6.33%, sys=30.25%, ctx=6400, majf=0, minf=96 00:27:53.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:53.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:53.976 issued rwts: total=69646,37084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.976 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:53.976 00:27:53.976 Run status group 0 (all jobs): 00:27:53.976 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=272MiB (285MB), run=6007-6007msec 00:27:53.976 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=145MiB (152MB), run=5479-5479msec 00:27:53.976 00:27:53.976 Disk stats (read/write): 
00:27:53.976 nvme0n1: ios=68794/36510, merge=0/0, ticks=472350/211803, in_queue=684153, util=98.66% 00:27:53.976 08:24:26 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:53.976 08:24:26 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:27:53.976 08:24:26 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:27:53.976 08:24:26 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:27:53.976 08:24:26 -- target/multipath.sh@22 -- # local timeout=20 00:27:53.976 08:24:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:53.976 08:24:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:53.976 08:24:26 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:53.976 08:24:26 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:27:53.976 08:24:26 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:27:53.976 08:24:26 -- target/multipath.sh@22 -- # local timeout=20 00:27:53.976 08:24:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:53.976 08:24:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:53.976 08:24:26 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:53.976 08:24:26 -- target/multipath.sh@113 -- # echo round-robin 00:27:53.976 08:24:26 -- target/multipath.sh@116 -- # fio_pid=62604 00:27:53.976 08:24:26 -- target/multipath.sh@118 -- # sleep 1 00:27:53.976 08:24:26 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:27:53.976 [global] 00:27:53.976 thread=1 00:27:53.976 invalidate=1 00:27:53.976 rw=randrw 00:27:53.976 time_based=1 00:27:53.976 runtime=6 00:27:53.976 ioengine=libaio 00:27:53.976 direct=1 00:27:53.976 bs=4096 00:27:53.976 iodepth=128 00:27:53.976 norandommap=0 00:27:53.976 numjobs=1 00:27:53.976 00:27:53.976 verify_dump=1 00:27:53.976 verify_backlog=512 00:27:53.976 verify_state_save=0 00:27:53.976 do_verify=1 00:27:53.976 verify=crc32c-intel 00:27:53.976 [job0] 00:27:53.976 filename=/dev/nvme0n1 00:27:53.976 Could not set queue depth (nvme0n1) 00:27:53.976 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:53.976 fio-3.35 00:27:53.976 Starting 1 thread 00:27:54.559 08:24:27 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:54.818 08:24:28 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:55.077 08:24:28 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:27:55.077 08:24:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:27:55.077 08:24:28 -- target/multipath.sh@22 -- # local timeout=20 00:27:55.077 08:24:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:55.077 08:24:28 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:27:55.077 08:24:28 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:55.077 08:24:28 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:27:55.077 08:24:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:27:55.077 08:24:28 -- target/multipath.sh@22 -- # local timeout=20 00:27:55.077 08:24:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:55.077 08:24:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:27:55.077 08:24:28 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:55.077 08:24:28 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:55.336 08:24:28 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:55.595 08:24:28 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:27:55.595 08:24:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:27:55.595 08:24:28 -- target/multipath.sh@22 -- # local timeout=20 00:27:55.595 08:24:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:55.595 08:24:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:55.595 08:24:28 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:27:55.595 08:24:28 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:27:55.595 08:24:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:27:55.595 08:24:28 -- target/multipath.sh@22 -- # local timeout=20 00:27:55.595 08:24:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:55.595 08:24:28 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:27:55.595 08:24:28 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:27:55.595 08:24:28 -- target/multipath.sh@132 -- # wait 62604 00:28:00.868 00:28:00.868 job0: (groupid=0, jobs=1): err= 0: pid=62626: Wed Apr 17 08:24:33 2024 00:28:00.868 read: IOPS=13.0k, BW=50.9MiB/s (53.3MB/s)(305MiB/6002msec) 00:28:00.868 slat (usec): min=4, max=5293, avg=37.56, stdev=156.05 00:28:00.868 clat (usec): min=258, max=17756, avg=6790.10, stdev=1887.63 00:28:00.868 lat (usec): min=269, max=17774, avg=6827.66, stdev=1894.93 00:28:00.868 clat percentiles (usec): 00:28:00.868 | 1.00th=[ 1696], 5.00th=[ 3228], 10.00th=[ 4490], 20.00th=[ 5669], 00:28:00.868 | 30.00th=[ 6259], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7177], 00:28:00.868 | 70.00th=[ 7439], 80.00th=[ 7767], 90.00th=[ 8717], 95.00th=[10290], 00:28:00.868 | 99.00th=[11731], 99.50th=[12387], 99.90th=[15008], 99.95th=[16188], 00:28:00.868 | 99.99th=[17433] 00:28:00.868 bw ( KiB/s): min=15792, max=41540, per=51.49%, avg=26819.27, stdev=6900.10, samples=11 00:28:00.868 iops : min= 3948, max=10385, avg=6704.64, stdev=1725.02, samples=11 00:28:00.868 write: IOPS=7417, BW=29.0MiB/s (30.4MB/s)(156MiB/5377msec); 0 zone resets 00:28:00.868 slat (usec): min=10, max=1857, avg=52.78, stdev=92.14 00:28:00.868 clat (usec): min=366, max=13073, avg=5772.10, stdev=1537.45 00:28:00.868 lat (usec): min=390, max=13104, avg=5824.88, stdev=1544.32 00:28:00.868 clat percentiles (usec): 00:28:00.868 | 1.00th=[ 1663], 5.00th=[ 3032], 10.00th=[ 3785], 20.00th=[ 4555], 00:28:00.868 | 30.00th=[ 5145], 40.00th=[ 5669], 50.00th=[ 5997], 60.00th=[ 6259], 00:28:00.868 | 70.00th=[ 6521], 80.00th=[ 6849], 90.00th=[ 7242], 95.00th=[ 7832], 00:28:00.868 | 99.00th=[10028], 99.50th=[10552], 99.90th=[11994], 99.95th=[12518], 00:28:00.868 | 99.99th=[12911] 00:28:00.868 bw ( KiB/s): min=16632, max=40790, per=90.14%, avg=26745.64, stdev=6565.66, samples=11 00:28:00.868 iops : min= 4158, max=10197, avg=6686.27, stdev=1641.28, samples=11 00:28:00.868 lat (usec) : 500=0.04%, 750=0.08%, 1000=0.15% 00:28:00.868 lat (msec) : 2=1.19%, 4=7.63%, 10=86.30%, 20=4.61% 00:28:00.868 cpu : usr=6.58%, sys=31.63%, ctx=7661, majf=0, minf=72 00:28:00.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:28:00.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:00.869 issued rwts: total=78160,39885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:00.869 00:28:00.869 Run status group 0 (all jobs): 00:28:00.869 READ: bw=50.9MiB/s (53.3MB/s), 50.9MiB/s-50.9MiB/s (53.3MB/s-53.3MB/s), io=305MiB (320MB), run=6002-6002msec 00:28:00.869 WRITE: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=156MiB (163MB), run=5377-5377msec 00:28:00.869 00:28:00.869 Disk stats (read/write): 00:28:00.869 nvme0n1: ios=77081/39373, merge=0/0, ticks=475786/198703, in_queue=674489, util=98.70% 00:28:00.869 08:24:33 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:00.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:28:00.869 08:24:33 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:00.869 08:24:33 -- common/autotest_common.sh@1198 -- # local i=0 00:28:00.869 08:24:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:00.869 08:24:33 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:00.869 08:24:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:00.869 08:24:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:00.869 08:24:33 -- common/autotest_common.sh@1210 -- # return 0 00:28:00.869 08:24:33 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.869 08:24:33 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:28:00.869 08:24:33 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:28:00.869 08:24:33 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:28:00.869 08:24:33 -- target/multipath.sh@144 -- # nvmftestfini 00:28:00.869 08:24:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:00.869 08:24:33 -- nvmf/common.sh@116 -- # sync 00:28:00.869 08:24:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:00.869 08:24:33 -- nvmf/common.sh@119 -- # set +e 00:28:00.869 08:24:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:00.869 08:24:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:00.869 rmmod nvme_tcp 00:28:00.869 rmmod nvme_fabrics 00:28:00.869 rmmod nvme_keyring 00:28:00.869 08:24:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:00.869 08:24:33 -- nvmf/common.sh@123 -- # set -e 00:28:00.869 08:24:33 -- nvmf/common.sh@124 -- # return 0 00:28:00.869 08:24:33 -- nvmf/common.sh@477 -- # '[' -n 62411 ']' 00:28:00.869 08:24:33 -- nvmf/common.sh@478 -- # killprocess 62411 00:28:00.869 08:24:33 -- common/autotest_common.sh@926 -- # '[' -z 62411 ']' 00:28:00.869 08:24:33 -- common/autotest_common.sh@930 -- # kill -0 62411 00:28:00.869 08:24:33 -- common/autotest_common.sh@931 -- # uname 00:28:00.869 08:24:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:00.869 08:24:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62411 00:28:00.869 killing process with pid 62411 00:28:00.869 08:24:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:00.869 08:24:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:00.869 08:24:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62411' 00:28:00.869 08:24:33 -- common/autotest_common.sh@945 -- # kill 62411 00:28:00.869 08:24:33 -- common/autotest_common.sh@950 -- # wait 62411 00:28:00.869 08:24:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:00.869 08:24:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:00.869 08:24:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:00.869 08:24:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:00.869 08:24:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:00.869 08:24:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.869 08:24:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.869 08:24:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.869 08:24:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:00.869 00:28:00.869 real 0m19.155s 00:28:00.869 user 1m12.262s 00:28:00.869 sys 0m9.424s 00:28:00.869 08:24:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.869 ************************************ 00:28:00.869 END TEST nvmf_multipath 00:28:00.869 ************************************ 00:28:00.869 08:24:34 -- common/autotest_common.sh@10 -- # set +x 00:28:00.869 08:24:34 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:28:00.869 08:24:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:00.869 08:24:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:00.869 08:24:34 -- common/autotest_common.sh@10 -- # set +x 00:28:00.869 ************************************ 00:28:00.869 START TEST nvmf_zcopy 00:28:00.869 ************************************ 00:28:00.869 08:24:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:28:01.141 * Looking for test storage... 00:28:01.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:01.141 08:24:34 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:01.141 08:24:34 -- nvmf/common.sh@7 -- # uname -s 00:28:01.141 08:24:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.142 08:24:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.142 08:24:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.142 08:24:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.142 08:24:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.142 08:24:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.142 08:24:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.142 08:24:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.142 08:24:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.142 08:24:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.142 08:24:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:01.142 08:24:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:01.142 08:24:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.142 08:24:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.142 08:24:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:01.142 08:24:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:01.142 08:24:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.142 08:24:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.142 08:24:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.142 08:24:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.142 08:24:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.142 
08:24:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.142 08:24:34 -- paths/export.sh@5 -- # export PATH 00:28:01.142 08:24:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.142 08:24:34 -- nvmf/common.sh@46 -- # : 0 00:28:01.142 08:24:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:01.142 08:24:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:01.142 08:24:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:01.142 08:24:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.142 08:24:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.142 08:24:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:01.142 08:24:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:01.142 08:24:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:01.142 08:24:34 -- target/zcopy.sh@12 -- # nvmftestinit 00:28:01.142 08:24:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:01.142 08:24:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.142 08:24:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:01.142 08:24:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:01.142 08:24:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:01.142 08:24:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.142 08:24:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.142 08:24:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.142 08:24:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:01.142 08:24:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:01.142 08:24:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:01.142 08:24:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:01.142 08:24:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:01.142 08:24:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:01.142 08:24:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.142 08:24:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.142 08:24:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:01.142 08:24:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:01.142 08:24:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:01.142 08:24:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:01.142 08:24:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:01.142 08:24:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.142 08:24:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:01.142 08:24:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:01.142 08:24:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:01.142 08:24:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:01.142 08:24:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:01.142 08:24:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:01.142 Cannot find device "nvmf_tgt_br" 00:28:01.142 08:24:34 -- nvmf/common.sh@154 -- # true 00:28:01.142 08:24:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:01.142 Cannot find device "nvmf_tgt_br2" 00:28:01.142 08:24:34 -- nvmf/common.sh@155 -- # true 00:28:01.142 08:24:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:01.142 08:24:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:01.142 Cannot find device "nvmf_tgt_br" 00:28:01.142 08:24:34 -- nvmf/common.sh@157 -- # true 00:28:01.142 08:24:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:01.142 Cannot find device "nvmf_tgt_br2" 00:28:01.142 08:24:34 -- nvmf/common.sh@158 -- # true 00:28:01.142 08:24:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:01.142 08:24:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:01.142 08:24:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:01.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:01.142 08:24:34 -- nvmf/common.sh@161 -- # true 00:28:01.142 08:24:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:01.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:01.142 08:24:34 -- nvmf/common.sh@162 -- # true 00:28:01.142 08:24:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:01.142 08:24:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:01.142 08:24:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:01.142 08:24:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:01.142 08:24:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:01.142 08:24:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:01.407 08:24:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:01.407 08:24:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:01.407 08:24:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:01.407 08:24:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:01.407 08:24:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:01.407 08:24:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:01.407 08:24:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:01.407 08:24:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:01.407 08:24:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:01.407 08:24:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:01.407 08:24:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:01.407 
08:24:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:01.407 08:24:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:01.407 08:24:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:01.407 08:24:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:01.407 08:24:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:01.407 08:24:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:01.407 08:24:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:01.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:28:01.407 00:28:01.407 --- 10.0.0.2 ping statistics --- 00:28:01.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.408 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:28:01.408 08:24:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:01.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:01.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:28:01.408 00:28:01.408 --- 10.0.0.3 ping statistics --- 00:28:01.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.408 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:28:01.408 08:24:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:01.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:01.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:28:01.408 00:28:01.408 --- 10.0.0.1 ping statistics --- 00:28:01.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.408 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:28:01.408 08:24:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.408 08:24:34 -- nvmf/common.sh@421 -- # return 0 00:28:01.408 08:24:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:01.408 08:24:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.408 08:24:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:01.408 08:24:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:01.408 08:24:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.408 08:24:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:01.408 08:24:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:01.408 08:24:34 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:01.408 08:24:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:01.408 08:24:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:01.408 08:24:34 -- common/autotest_common.sh@10 -- # set +x 00:28:01.408 08:24:34 -- nvmf/common.sh@469 -- # nvmfpid=62870 00:28:01.408 08:24:34 -- nvmf/common.sh@470 -- # waitforlisten 62870 00:28:01.408 08:24:34 -- common/autotest_common.sh@819 -- # '[' -z 62870 ']' 00:28:01.408 08:24:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:01.408 08:24:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.408 08:24:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:01.408 08:24:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
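The nvmf_veth_init trace above builds a small veth/bridge topology around a dedicated network namespace: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target ends nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) move into nvmf_tgt_ns_spdk, and the peer ends are enslaved to the nvmf_br bridge with TCP/4420 opened toward the initiator. A standalone sketch of the same setup, with names and addresses taken from the trace (illustrative only; run as root):

    # Namespace for the NVMe-oF target
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: one initiator-side pair, two target-side pairs
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the peer ends together and allow NVMe/TCP traffic
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, as in the trace
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1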
00:28:01.408 08:24:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:01.408 08:24:34 -- common/autotest_common.sh@10 -- # set +x 00:28:01.408 [2024-04-17 08:24:34.698392] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:01.408 [2024-04-17 08:24:34.698465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.667 [2024-04-17 08:24:34.835215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.667 [2024-04-17 08:24:34.932043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:01.667 [2024-04-17 08:24:34.932169] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.667 [2024-04-17 08:24:34.932177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.667 [2024-04-17 08:24:34.932182] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.668 [2024-04-17 08:24:34.932204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.237 08:24:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:02.237 08:24:35 -- common/autotest_common.sh@852 -- # return 0 00:28:02.237 08:24:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:02.237 08:24:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:02.237 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.237 08:24:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.237 08:24:35 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:02.237 08:24:35 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:02.237 08:24:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.237 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.237 [2024-04-17 08:24:35.564133] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.496 08:24:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.496 08:24:35 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:02.496 08:24:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.496 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.496 08:24:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.496 08:24:35 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.496 08:24:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.496 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.496 [2024-04-17 08:24:35.588132] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.496 08:24:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.496 08:24:35 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:02.496 08:24:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.496 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.496 08:24:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.496 08:24:35 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
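Condensed, the target bring-up traced above is: start nvmf_tgt inside the namespace, wait for its RPC socket, then provision it over that socket; rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py. A rough standalone equivalent (paths and arguments copied from the trace; the socket-wait loop is a simplified stand-in for the waitforlisten helper, and the final namespace attach is the step that follows next in the trace):

    SPDK=/home/vagrant/spdk_repo/spdk

    # Start the target in the namespace (-i shm id 0, -e all tracepoint groups, -m core mask 0x2)
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

    rpc() { "$SPDK/scripts/rpc.py" "$@"; }    # hypothetical shorthand for the rpc_cmd calls

    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MiB malloc bdev, 4 KiB blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1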
00:28:02.496 08:24:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.496 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.496 malloc0 00:28:02.496 08:24:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.496 08:24:35 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:02.496 08:24:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.496 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.496 08:24:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.496 08:24:35 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:02.496 08:24:35 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:02.496 08:24:35 -- nvmf/common.sh@520 -- # config=() 00:28:02.496 08:24:35 -- nvmf/common.sh@520 -- # local subsystem config 00:28:02.496 08:24:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:02.496 08:24:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:02.496 { 00:28:02.496 "params": { 00:28:02.496 "name": "Nvme$subsystem", 00:28:02.496 "trtype": "$TEST_TRANSPORT", 00:28:02.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.496 "adrfam": "ipv4", 00:28:02.496 "trsvcid": "$NVMF_PORT", 00:28:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.496 "hdgst": ${hdgst:-false}, 00:28:02.496 "ddgst": ${ddgst:-false} 00:28:02.496 }, 00:28:02.496 "method": "bdev_nvme_attach_controller" 00:28:02.496 } 00:28:02.496 EOF 00:28:02.496 )") 00:28:02.496 08:24:35 -- nvmf/common.sh@542 -- # cat 00:28:02.496 08:24:35 -- nvmf/common.sh@544 -- # jq . 00:28:02.496 08:24:35 -- nvmf/common.sh@545 -- # IFS=, 00:28:02.496 08:24:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:02.496 "params": { 00:28:02.496 "name": "Nvme1", 00:28:02.496 "trtype": "tcp", 00:28:02.496 "traddr": "10.0.0.2", 00:28:02.496 "adrfam": "ipv4", 00:28:02.496 "trsvcid": "4420", 00:28:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:02.496 "hdgst": false, 00:28:02.496 "ddgst": false 00:28:02.496 }, 00:28:02.496 "method": "bdev_nvme_attach_controller" 00:28:02.496 }' 00:28:02.496 [2024-04-17 08:24:35.673101] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:02.496 [2024-04-17 08:24:35.673163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:28:02.496 [2024-04-17 08:24:35.810506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.755 [2024-04-17 08:24:35.907113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.755 Running I/O for 10 seconds... 
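On the initiator side, gen_nvmf_target_json in the trace emits a bdev-subsystem config whose single entry is a bdev_nvme_attach_controller call against the listener created above, and bdevperf consumes it via --json through a process-substitution fd. A sketch of the same invocation with the config written to a file instead; the attach parameters are those printed in the trace, while the surrounding "subsystems"/"bdev" wrapper is the standard SPDK JSON config layout and is assumed here:

    cat > /tmp/nvmf_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # First pass in the trace: 10 s verify workload, queue depth 128, 8 KiB I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nvmf_bdev.json -t 10 -q 128 -w verify -o 8192

The second pass traced below reuses the same config and switches the workload to -t 5 -q 128 -w randrw -M 50 -o 8192.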
00:28:12.730 00:28:12.730 Latency(us) 00:28:12.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.730 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:28:12.730 Verification LBA range: start 0x0 length 0x1000 00:28:12.730 Nvme1n1 : 10.01 11427.57 89.28 0.00 0.00 11173.39 1159.04 20261.79 00:28:12.730 =================================================================================================================== 00:28:12.730 Total : 11427.57 89.28 0.00 0.00 11173.39 1159.04 20261.79 00:28:12.988 08:24:46 -- target/zcopy.sh@39 -- # perfpid=63021 00:28:12.988 08:24:46 -- target/zcopy.sh@41 -- # xtrace_disable 00:28:12.988 08:24:46 -- common/autotest_common.sh@10 -- # set +x 00:28:12.988 08:24:46 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:28:12.988 08:24:46 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:28:12.988 08:24:46 -- nvmf/common.sh@520 -- # config=() 00:28:12.988 08:24:46 -- nvmf/common.sh@520 -- # local subsystem config 00:28:12.988 08:24:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:12.988 08:24:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:12.988 { 00:28:12.988 "params": { 00:28:12.988 "name": "Nvme$subsystem", 00:28:12.988 "trtype": "$TEST_TRANSPORT", 00:28:12.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.988 "adrfam": "ipv4", 00:28:12.988 "trsvcid": "$NVMF_PORT", 00:28:12.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.988 "hdgst": ${hdgst:-false}, 00:28:12.988 "ddgst": ${ddgst:-false} 00:28:12.988 }, 00:28:12.988 "method": "bdev_nvme_attach_controller" 00:28:12.988 } 00:28:12.988 EOF 00:28:12.988 )") 00:28:12.988 08:24:46 -- nvmf/common.sh@542 -- # cat 00:28:12.988 [2024-04-17 08:24:46.289015] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:12.988 [2024-04-17 08:24:46.289056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:12.988 08:24:46 -- nvmf/common.sh@544 -- # jq . 
00:28:12.988 08:24:46 -- nvmf/common.sh@545 -- # IFS=, 00:28:12.988 08:24:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:12.988 "params": { 00:28:12.988 "name": "Nvme1", 00:28:12.988 "trtype": "tcp", 00:28:12.988 "traddr": "10.0.0.2", 00:28:12.988 "adrfam": "ipv4", 00:28:12.988 "trsvcid": "4420", 00:28:12.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:12.988 "hdgst": false, 00:28:12.988 "ddgst": false 00:28:12.988 }, 00:28:12.988 "method": "bdev_nvme_attach_controller" 00:28:12.988 }' 00:28:12.988 [2024-04-17 08:24:46.300956] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:12.988 [2024-04-17 08:24:46.300979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:12.988 [2024-04-17 08:24:46.312922] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:12.988 [2024-04-17 08:24:46.312942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.324915] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.324940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.336896] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.336922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.345501] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:13.247 [2024-04-17 08:24:46.345580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63021 ] 00:28:13.247 [2024-04-17 08:24:46.348871] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.348891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.360860] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.360889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.372835] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.372859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.384807] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.384828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.396787] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.396807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.408787] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.408814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.420756] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 
08:24:46.420781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.432733] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.432754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.444722] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.444741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.456706] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.456728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.468690] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.468717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.480673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.480697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.487486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.247 [2024-04-17 08:24:46.492656] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.492685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.504636] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.504661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.516631] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.516673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.528651] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.528692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.540579] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.540610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.552567] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.552602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.564528] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.564554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.247 [2024-04-17 08:24:46.576513] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.247 [2024-04-17 08:24:46.576545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.588493] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.588525] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.593292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.506 [2024-04-17 08:24:46.600475] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.600507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.612465] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.612502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.624435] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.624470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.636420] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.636456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.648397] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.648424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.660376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.660400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.672349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.672370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.684353] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.684386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.696338] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.696367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.708350] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.708377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.720321] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.720349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.732309] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.732342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.744302] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.744340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 Running I/O for 5 seconds... 
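While the 5-second randrw pass runs, the trace is dominated by repeated pairs of "Requested NSID 1 already in use" / "Unable to add namespace" messages; each pair is the target rejecting an nvmf_subsystem_add_ns RPC for an NSID that is already attached, which the zcopy test appears to keep issuing while I/O is in flight. A minimal way to reproduce that error pair outside the test, assuming the target provisioned above (hypothetical standalone command, not the test's own loop):

    # NSID 1 was already attached to cnode1 during provisioning, so this call fails with
    # "Requested NSID 1 already in use" and the RPC layer reports "Unable to add namespace"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1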
00:28:13.506 [2024-04-17 08:24:46.756278] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.756300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.771781] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.771815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.785985] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.786018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.799552] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.799603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.814668] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.814704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.506 [2024-04-17 08:24:46.831457] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.506 [2024-04-17 08:24:46.831493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.847525] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.847563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.863498] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.863536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.874121] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.874158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.889737] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.889772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.905538] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.905576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.919951] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.919984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.934565] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.934600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.945464] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.945501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.960509] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 
[2024-04-17 08:24:46.960545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.971872] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.971906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:46.987631] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:46.987665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:47.002529] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:47.002564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:47.017765] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:47.017801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:47.033859] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:47.033891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:47.049964] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:47.050003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:47.061024] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:47.061061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:47.076696] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:47.076729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:13.766 [2024-04-17 08:24:47.092850] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:13.766 [2024-04-17 08:24:47.092884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.109113] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.109147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.125462] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.125495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.141822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.141860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.158007] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.158042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.168346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.168379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.183804] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.183836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.199691] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.199727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.210694] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.210729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.226252] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.226291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.242802] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.242837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.258812] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.258851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.273283] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.273329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.284699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.284730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.299672] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.299708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.316176] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.316215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.332145] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.332184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.028 [2024-04-17 08:24:47.346348] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.028 [2024-04-17 08:24:47.346383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.362750] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.362791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.378044] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.378081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.393947] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.393983] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.405829] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.405865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.421643] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.421680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.438096] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.438131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.454369] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.454404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.470868] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.470906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.487975] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.488018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.504862] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.504902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.521776] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.521810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.538587] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.538621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.555448] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.555479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.572464] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.572499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.588834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.588872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.291 [2024-04-17 08:24:47.606234] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.291 [2024-04-17 08:24:47.606273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.622773] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.622812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.639014] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.639049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.654595] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.654634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.670579] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.670610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.686218] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.686252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.697684] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.697718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.712666] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.712698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.728098] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.728133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.742400] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.742434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.756469] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.756499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.767164] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.767196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.781793] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.781825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.795723] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.795753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.809689] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.809720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.823355] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.823388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.838851] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.838881] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.855410] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.855443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.554 [2024-04-17 08:24:47.871164] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.554 [2024-04-17 08:24:47.871197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:47.885805] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:47.885840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:47.896621] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:47.896652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:47.911651] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:47.911678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:47.927418] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:47.927448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:47.941224] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:47.941267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:47.955598] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:47.955630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:47.967426] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:47.967459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:47.982137] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:47.982171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:47.993466] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:47.993501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.007886] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.007919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.018834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.018868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.034404] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.034436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.051184] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.051219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.067389] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.067422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.083453] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.083486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.098111] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.098144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.113213] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.113261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.128792] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.128824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:14.815 [2024-04-17 08:24:48.143246] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:14.815 [2024-04-17 08:24:48.143278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.157992] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.158027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.168573] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.168604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.184030] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.184063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.199679] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.199715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.210499] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.210531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.225687] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.225721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.241392] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.241422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.255748] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.255781] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.266806] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.266838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.281448] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.281478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.292557] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.292587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.306948] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.306984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.321563] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.321597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.336760] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.336793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.350885] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.350917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.366635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.366671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.381695] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.381729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.074 [2024-04-17 08:24:48.396096] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.074 [2024-04-17 08:24:48.396132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.411628] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.411664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.427427] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.427461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.442000] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.442035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.457114] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.457146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.471617] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.471651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.487574] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.487610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.503213] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.503247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.518925] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.518960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.533587] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.533619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.550350] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.550400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.566695] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.566732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.583479] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.583513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.599657] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.599691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.616226] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.616258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.632243] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.632274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.646653] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.646685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.333 [2024-04-17 08:24:48.662847] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.333 [2024-04-17 08:24:48.662879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.679001] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.679033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.695288] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.695330] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.706917] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.706952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.722341] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.722375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.738361] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.738407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.749986] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.750019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.765482] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.765517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.781067] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.781098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.792939] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.792977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.808958] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.808991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.825377] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.825412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.842220] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.842258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.858752] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.858787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.875649] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.875685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.892291] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.892335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.908778] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.908813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.594 [2024-04-17 08:24:48.925064] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.594 [2024-04-17 08:24:48.925101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:48.941527] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:48.941560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:48.957461] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:48.957492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:48.973601] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:48.973634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:48.985210] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:48.985240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.001067] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.001097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.016598] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.016633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.031568] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.031601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.048135] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.048166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.064229] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.064265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.080723] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.080756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.096900] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.096931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.108173] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.108206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.123232] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.123266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.139192] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.139228] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.150555] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.150589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.166598] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.166634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:15.853 [2024-04-17 08:24:49.182920] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:15.853 [2024-04-17 08:24:49.182954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.199204] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.199239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.215876] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.215911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.230711] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.230744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.246224] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.246278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.257032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.257066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.273563] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.273596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.288619] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.288654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.299940] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.299974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.315107] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.315138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.330538] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.330569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.344475] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.344516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.358797] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.358851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.373352] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.373411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.388679] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.388740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.407590] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.407656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.422351] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.422401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.112 [2024-04-17 08:24:49.433710] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.112 [2024-04-17 08:24:49.433761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.449082] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.449140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.465121] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.465176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.480016] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.480071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.496182] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.496236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.508350] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.508392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.523477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.523526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.539065] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.539120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.553258] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.553323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.564735] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.564790] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.579625] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.579677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.595901] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.595953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.607665] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.607702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.623423] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.623456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.640482] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.640534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.656964] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.657010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.674222] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.674268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.372 [2024-04-17 08:24:49.691370] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.372 [2024-04-17 08:24:49.691427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.708197] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.708238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.725063] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.725101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.742669] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.742705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.758039] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.758075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.775332] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.775369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.791541] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.791580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.808762] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.808799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.825731] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.825770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.842805] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.842842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.859632] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.859671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.874728] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.874767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.885505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.885542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.901317] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.901366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.918834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.918887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.934438] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.934483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.632 [2024-04-17 08:24:49.955114] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.632 [2024-04-17 08:24:49.955161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:49.966029] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:49.966079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:49.982001] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:49.982050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:49.998343] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:49.998389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.015751] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.015791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.032044] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.032098] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.048749] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.048802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.066241] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.066296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.082968] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.083023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.100227] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.100281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.115678] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.115722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.124381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.124431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.135877] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.135926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.151321] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.151371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.169250] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.169313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.189431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.189480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.206839] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.206895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:16.892 [2024-04-17 08:24:50.218179] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:16.892 [2024-04-17 08:24:50.218216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.226409] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.226445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.236888] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.236930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.246482] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.246532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.262141] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.262204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.280291] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.280359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.296118] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.296177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.314058] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.314117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.330873] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.330931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.347721] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.347781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.358505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.358554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.374766] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.374824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.390647] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.390707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.408158] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.408214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.424175] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.424237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.441694] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.441753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.457720] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.457778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.151 [2024-04-17 08:24:50.474822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.151 [2024-04-17 08:24:50.474879] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.490431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.490486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.507781] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.507824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.523887] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.523927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.534938] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.534976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.542770] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.542806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.554901] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.554955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.565732] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.565784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.573469] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.573511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.585445] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.585483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.596508] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.596553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.613015] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.613076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.624545] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.624589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.632799] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.632836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.644668] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.644713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.656238] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.656298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.664736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.664781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.676447] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.676490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.687359] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.687420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.702955] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.702996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.720216] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.720266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.411 [2024-04-17 08:24:50.737552] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.411 [2024-04-17 08:24:50.737603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.752706] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.752755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.764320] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.764364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.780081] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.780131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.796214] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.796260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.813745] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.813784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.830616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.830650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.847129] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.847166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.863236] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.863271] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.880109] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.880144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.897321] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.897354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.908506] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.908539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.916724] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.916757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.927855] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.927887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.943085] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.943118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.953969] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.954005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.961777] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.961808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.976577] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.713 [2024-04-17 08:24:50.976622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.713 [2024-04-17 08:24:50.993120] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.714 [2024-04-17 08:24:50.993161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.714 [2024-04-17 08:24:51.009282] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.714 [2024-04-17 08:24:51.009328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.714 [2024-04-17 08:24:51.026460] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.714 [2024-04-17 08:24:51.026493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.714 [2024-04-17 08:24:51.043615] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.714 [2024-04-17 08:24:51.043661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.058587] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.058623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.069675] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.069714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.085403] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.085440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.102400] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.102450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.118965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.119021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.136156] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.136207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.153239] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.153287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.169854] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.169911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.186839] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.186887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.203823] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.203877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.220470] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.220526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.237933] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.237986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.253168] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.253228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.264295] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.264357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.280257] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.280300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:17.972 [2024-04-17 08:24:51.297530] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:17.972 [2024-04-17 08:24:51.297568] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.314553] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.314606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.331167] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.331226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.347834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.347892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.364153] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.364208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.381449] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.381504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.397988] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.398043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.415405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.415487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.431165] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.431218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.447616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.447671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.465051] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.465097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.480559] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.480610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.497632] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.497685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.514779] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.514831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.531473] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.531533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.231 [2024-04-17 08:24:51.547723] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.231 [2024-04-17 08:24:51.547782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.564685] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.564729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.581335] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.581376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.598593] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.598648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.614722] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.614767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.632720] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.632757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.648281] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.648342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.665494] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.665534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.682276] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.682324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.699099] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.699135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.714617] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.714652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.726383] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.726417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.742517] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.742555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:18.490 
00:28:18.490 Latency(us)
00:28:18.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.490 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:28:18.490 Nvme1n1 : 5.01 14482.99 113.15 0.00 0.00 8829.85 3133.71 17743.37
00:28:18.490 ===================================================================================================================
00:28:18.490 Total : 14482.99 113.15 0.00 0.00 8829.85 3133.71 17743.37 00:28:18.490 [2024-04-17 08:24:51.754093] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.754127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.766069] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.766099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.774054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.774081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.786039] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.786070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.798015] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.798047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.490 [2024-04-17 08:24:51.809995] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.490 [2024-04-17 08:24:51.810025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.747 [2024-04-17 08:24:51.821976] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.747 [2024-04-17 08:24:51.822006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.747 [2024-04-17 08:24:51.833959] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.747 [2024-04-17 08:24:51.833989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.747 [2024-04-17 08:24:51.845935] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.747 [2024-04-17 08:24:51.845965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.747 [2024-04-17 08:24:51.857914] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.747 [2024-04-17 08:24:51.857945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.747 [2024-04-17 08:24:51.869893] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.747 [2024-04-17 08:24:51.869916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.747 [2024-04-17 08:24:51.881875] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.747 [2024-04-17 08:24:51.881899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.747 [2024-04-17 08:24:51.893860] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.747 [2024-04-17 08:24:51.893879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.748 [2024-04-17 08:24:51.905842] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.748 [2024-04-17 08:24:51.905870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.748 [2024-04-17 08:24:51.917822] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.748 [2024-04-17 08:24:51.917841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.748 [2024-04-17 08:24:51.929801] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.748 [2024-04-17 08:24:51.929818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.748 [2024-04-17 08:24:51.941785] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.748 [2024-04-17 08:24:51.941811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.748 [2024-04-17 08:24:51.953768] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.748 [2024-04-17 08:24:51.953795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.748 [2024-04-17 08:24:51.965746] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.748 [2024-04-17 08:24:51.965768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.748 [2024-04-17 08:24:51.973731] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:18.748 [2024-04-17 08:24:51.973753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:18.748 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (63021) - No such process 00:28:18.748 08:24:51 -- target/zcopy.sh@49 -- # wait 63021 00:28:18.748 08:24:51 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.748 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.748 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.748 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.748 08:24:51 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:18.748 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.748 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.748 delay0 00:28:18.748 08:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.748 08:24:51 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:28:18.748 08:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.748 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:18.748 08:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.748 08:24:52 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:28:19.006 [2024-04-17 08:24:52.169968] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:25.580 Initializing NVMe Controllers 00:28:25.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:25.580 Initialization complete. Launching workers. 
00:28:25.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 151 00:28:25.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 438, failed to submit 33 00:28:25.580 success 314, unsuccess 124, failed 0 00:28:25.580 08:24:58 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:28:25.580 08:24:58 -- target/zcopy.sh@60 -- # nvmftestfini 00:28:25.580 08:24:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:25.580 08:24:58 -- nvmf/common.sh@116 -- # sync 00:28:25.580 08:24:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:25.580 08:24:58 -- nvmf/common.sh@119 -- # set +e 00:28:25.580 08:24:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:25.580 08:24:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:25.580 rmmod nvme_tcp 00:28:25.580 rmmod nvme_fabrics 00:28:25.580 rmmod nvme_keyring 00:28:25.580 08:24:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:25.580 08:24:58 -- nvmf/common.sh@123 -- # set -e 00:28:25.580 08:24:58 -- nvmf/common.sh@124 -- # return 0 00:28:25.580 08:24:58 -- nvmf/common.sh@477 -- # '[' -n 62870 ']' 00:28:25.580 08:24:58 -- nvmf/common.sh@478 -- # killprocess 62870 00:28:25.580 08:24:58 -- common/autotest_common.sh@926 -- # '[' -z 62870 ']' 00:28:25.580 08:24:58 -- common/autotest_common.sh@930 -- # kill -0 62870 00:28:25.580 08:24:58 -- common/autotest_common.sh@931 -- # uname 00:28:25.580 08:24:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:25.580 08:24:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62870 00:28:25.580 08:24:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:25.580 08:24:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:25.580 killing process with pid 62870 00:28:25.580 08:24:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62870' 00:28:25.580 08:24:58 -- common/autotest_common.sh@945 -- # kill 62870 00:28:25.580 08:24:58 -- common/autotest_common.sh@950 -- # wait 62870 00:28:25.580 08:24:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:25.580 08:24:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:25.580 08:24:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:25.580 08:24:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.580 08:24:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:25.580 08:24:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.580 08:24:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.580 08:24:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.580 08:24:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:25.580 00:28:25.580 real 0m24.534s 00:28:25.580 user 0m41.144s 00:28:25.580 sys 0m5.880s 00:28:25.580 08:24:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.580 08:24:58 -- common/autotest_common.sh@10 -- # set +x 00:28:25.580 ************************************ 00:28:25.580 END TEST nvmf_zcopy 00:28:25.580 ************************************ 00:28:25.580 08:24:58 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:28:25.580 08:24:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:25.580 08:24:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:25.580 08:24:58 -- common/autotest_common.sh@10 -- # set +x 00:28:25.580 ************************************ 00:28:25.580 START TEST 
nvmf_nmic 00:28:25.580 ************************************ 00:28:25.580 08:24:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:28:25.580 * Looking for test storage... 00:28:25.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:25.580 08:24:58 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:25.580 08:24:58 -- nvmf/common.sh@7 -- # uname -s 00:28:25.580 08:24:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.580 08:24:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.580 08:24:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.580 08:24:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.580 08:24:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.580 08:24:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.580 08:24:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.580 08:24:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.580 08:24:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.580 08:24:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.580 08:24:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:25.580 08:24:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:25.580 08:24:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.580 08:24:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.580 08:24:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:25.580 08:24:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:25.580 08:24:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.580 08:24:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.580 08:24:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.580 08:24:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.580 08:24:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.580 08:24:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.580 08:24:58 -- paths/export.sh@5 -- # export PATH 00:28:25.580 08:24:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.580 08:24:58 -- nvmf/common.sh@46 -- # : 0 00:28:25.580 08:24:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:25.580 08:24:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:25.580 08:24:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:25.580 08:24:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.580 08:24:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.580 08:24:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:25.580 08:24:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:25.580 08:24:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:25.580 08:24:58 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:25.580 08:24:58 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:25.580 08:24:58 -- target/nmic.sh@14 -- # nvmftestinit 00:28:25.580 08:24:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:25.580 08:24:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.580 08:24:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:25.580 08:24:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:25.580 08:24:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:25.580 08:24:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.580 08:24:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.580 08:24:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.580 08:24:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:25.580 08:24:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:25.580 08:24:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:25.580 08:24:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:25.580 08:24:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:25.580 08:24:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:25.580 08:24:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.580 08:24:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.580 08:24:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:25.581 08:24:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:25.581 08:24:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:25.581 08:24:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:25.581 08:24:58 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:25.581 08:24:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.581 08:24:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:25.581 08:24:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:25.581 08:24:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:25.581 08:24:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:25.581 08:24:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:25.840 08:24:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:25.840 Cannot find device "nvmf_tgt_br" 00:28:25.840 08:24:58 -- nvmf/common.sh@154 -- # true 00:28:25.840 08:24:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:25.840 Cannot find device "nvmf_tgt_br2" 00:28:25.840 08:24:58 -- nvmf/common.sh@155 -- # true 00:28:25.840 08:24:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:25.840 08:24:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:25.840 Cannot find device "nvmf_tgt_br" 00:28:25.840 08:24:58 -- nvmf/common.sh@157 -- # true 00:28:25.840 08:24:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:25.840 Cannot find device "nvmf_tgt_br2" 00:28:25.840 08:24:58 -- nvmf/common.sh@158 -- # true 00:28:25.840 08:24:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:25.840 08:24:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:25.840 08:24:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:25.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:25.840 08:24:59 -- nvmf/common.sh@161 -- # true 00:28:25.840 08:24:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:25.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:25.840 08:24:59 -- nvmf/common.sh@162 -- # true 00:28:25.840 08:24:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:25.840 08:24:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:25.840 08:24:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:25.840 08:24:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:25.840 08:24:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:25.840 08:24:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:25.840 08:24:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:25.840 08:24:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:25.840 08:24:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:25.840 08:24:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:25.840 08:24:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:25.840 08:24:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:25.840 08:24:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:25.840 08:24:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:25.840 08:24:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:26.100 08:24:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:28:26.100 08:24:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:26.100 08:24:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:26.100 08:24:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:26.100 08:24:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:26.100 08:24:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:26.100 08:24:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:26.100 08:24:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:26.100 08:24:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:26.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:28:26.100 00:28:26.100 --- 10.0.0.2 ping statistics --- 00:28:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.100 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:28:26.100 08:24:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:26.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:26.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:28:26.100 00:28:26.100 --- 10.0.0.3 ping statistics --- 00:28:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.100 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:28:26.100 08:24:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:26.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:26.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:28:26.100 00:28:26.100 --- 10.0.0.1 ping statistics --- 00:28:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.100 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:28:26.100 08:24:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.100 08:24:59 -- nvmf/common.sh@421 -- # return 0 00:28:26.100 08:24:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:26.100 08:24:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.100 08:24:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:26.100 08:24:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:26.100 08:24:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.100 08:24:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:26.100 08:24:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:26.100 08:24:59 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:28:26.100 08:24:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:26.100 08:24:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:26.100 08:24:59 -- common/autotest_common.sh@10 -- # set +x 00:28:26.100 08:24:59 -- nvmf/common.sh@469 -- # nvmfpid=63340 00:28:26.100 08:24:59 -- nvmf/common.sh@470 -- # waitforlisten 63340 00:28:26.100 08:24:59 -- common/autotest_common.sh@819 -- # '[' -z 63340 ']' 00:28:26.100 08:24:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.100 08:24:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:26.100 08:24:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
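Condensed, the nvmf_veth_init sequence traced above builds a small bridged test topology: one veth pair for the initiator stays in the root namespace, two pairs for the target are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are enslaved to the nvmf_br bridge. A sketch with the same interface names and addresses the harness uses (link-up commands omitted, as are the best-effort cleanup calls that print the "Cannot find device" warnings):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings verify initiator-to-target, initiator-to-second-target and target-to-initiator reachability, and the nvmf_tgt process below is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ...), which is why its listeners on 10.0.0.2 are only reachable from the host through this bridge.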
00:28:26.100 08:24:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:26.100 08:24:59 -- common/autotest_common.sh@10 -- # set +x 00:28:26.100 08:24:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:26.100 [2024-04-17 08:24:59.348410] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:26.100 [2024-04-17 08:24:59.348990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.360 [2024-04-17 08:24:59.494994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.360 [2024-04-17 08:24:59.594104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:26.360 [2024-04-17 08:24:59.594239] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.360 [2024-04-17 08:24:59.594247] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.360 [2024-04-17 08:24:59.594254] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.360 [2024-04-17 08:24:59.594378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.360 [2024-04-17 08:24:59.596340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.360 [2024-04-17 08:24:59.596456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.360 [2024-04-17 08:24:59.596461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.929 08:25:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:26.929 08:25:00 -- common/autotest_common.sh@852 -- # return 0 00:28:26.929 08:25:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:26.929 08:25:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:26.929 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:26.929 08:25:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.929 08:25:00 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.929 08:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.929 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:26.929 [2024-04-17 08:25:00.245744] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.189 08:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.189 08:25:00 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:27.189 08:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.189 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.189 Malloc0 00:28:27.189 08:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.189 08:25:00 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:27.189 08:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.189 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.189 08:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.189 08:25:00 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:27.189 08:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.189 08:25:00 -- 
common/autotest_common.sh@10 -- # set +x 00:28:27.189 08:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.189 08:25:00 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.189 08:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.189 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.189 [2024-04-17 08:25:00.321718] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.189 08:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.189 08:25:00 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:28:27.189 test case1: single bdev can't be used in multiple subsystems 00:28:27.189 08:25:00 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:28:27.189 08:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.189 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.189 08:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.189 08:25:00 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:27.189 08:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.189 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.189 08:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.189 08:25:00 -- target/nmic.sh@28 -- # nmic_status=0 00:28:27.189 08:25:00 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:28:27.189 08:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.189 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.189 [2024-04-17 08:25:00.357555] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:28:27.189 [2024-04-17 08:25:00.357583] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:28:27.189 [2024-04-17 08:25:00.357590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:27.189 request: 00:28:27.189 { 00:28:27.189 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:28:27.189 "namespace": { 00:28:27.189 "bdev_name": "Malloc0" 00:28:27.189 }, 00:28:27.189 "method": "nvmf_subsystem_add_ns", 00:28:27.189 "req_id": 1 00:28:27.189 } 00:28:27.189 Got JSON-RPC error response 00:28:27.189 response: 00:28:27.189 { 00:28:27.189 "code": -32602, 00:28:27.189 "message": "Invalid parameters" 00:28:27.189 } 00:28:27.189 08:25:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:27.189 08:25:00 -- target/nmic.sh@29 -- # nmic_status=1 00:28:27.189 08:25:00 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:28:27.189 08:25:00 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:28:27.189 Adding namespace failed - expected result. 
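Test case1 above exercises the bdev claim semantics: once Malloc0 is attached to cnode1 as a namespace, the NVMe-oF target holds an exclusive_write claim on the bdev, so attaching the same bdev to a second subsystem has to fail. The equivalent calls issued directly through scripts/rpc.py (same names as in the trace; the rpc_cmd helper used by the script sends the same JSON-RPC methods, and the final call is the one that produces the -32602 "Invalid parameters" response shown above):

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # succeeds; Malloc0 is now claimed
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: bdev already claimed by cnode1

(rpc.py here abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py.)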
00:28:27.189 08:25:00 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:28:27.189 test case2: host connect to nvmf target in multiple paths 00:28:27.190 08:25:00 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:27.190 08:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.190 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.190 [2024-04-17 08:25:00.373644] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:27.190 08:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.190 08:25:00 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:27.190 08:25:00 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:28:27.449 08:25:00 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:28:27.449 08:25:00 -- common/autotest_common.sh@1177 -- # local i=0 00:28:27.449 08:25:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:28:27.449 08:25:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:28:27.449 08:25:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:28:29.356 08:25:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:28:29.356 08:25:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:28:29.356 08:25:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:28:29.356 08:25:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:28:29.356 08:25:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:28:29.356 08:25:02 -- common/autotest_common.sh@1187 -- # return 0 00:28:29.356 08:25:02 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:28:29.356 [global] 00:28:29.356 thread=1 00:28:29.356 invalidate=1 00:28:29.356 rw=write 00:28:29.356 time_based=1 00:28:29.356 runtime=1 00:28:29.356 ioengine=libaio 00:28:29.356 direct=1 00:28:29.356 bs=4096 00:28:29.356 iodepth=1 00:28:29.356 norandommap=0 00:28:29.356 numjobs=1 00:28:29.356 00:28:29.616 verify_dump=1 00:28:29.616 verify_backlog=512 00:28:29.616 verify_state_save=0 00:28:29.616 do_verify=1 00:28:29.616 verify=crc32c-intel 00:28:29.616 [job0] 00:28:29.616 filename=/dev/nvme0n1 00:28:29.616 Could not set queue depth (nvme0n1) 00:28:29.616 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:29.616 fio-3.35 00:28:29.616 Starting 1 thread 00:28:30.993 00:28:30.993 job0: (groupid=0, jobs=1): err= 0: pid=63433: Wed Apr 17 08:25:03 2024 00:28:30.993 read: IOPS=3601, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec) 00:28:30.993 slat (nsec): min=6514, max=36586, avg=8592.31, stdev=1923.11 00:28:30.993 clat (usec): min=106, max=520, avg=145.66, stdev=18.18 00:28:30.993 lat (usec): min=114, max=532, avg=154.25, stdev=18.39 00:28:30.993 clat percentiles (usec): 00:28:30.993 | 1.00th=[ 113], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 130], 00:28:30.993 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:28:30.993 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 165], 
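Test case2 connects the host to the same subsystem over both listeners created above (port 4420 and port 4421), so the initiator ends up with two controllers and two paths to one namespace. A quick way to confirm that outside the script would be nvme list-subsys, which is not run by the harness but should list cnode1 with two live tcp paths; the disconnect at the end of this test reporting "disconnected 2 controller(s)" checks the same thing in effect:

    # hostnqn/hostid as generated earlier in the trace
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=... --hostid=...
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn=... --hostid=...
    nvme list-subsys                      # expect two controllers under nqn.2016-06.io.spdk:cnode1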
95.00th=[ 169], 00:28:30.993 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 200], 99.95th=[ 453], 00:28:30.993 | 99.99th=[ 523] 00:28:30.993 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:28:30.993 slat (usec): min=9, max=182, avg=15.02, stdev= 8.94 00:28:30.993 clat (usec): min=66, max=295, avg=91.30, stdev=12.89 00:28:30.993 lat (usec): min=77, max=424, avg=106.33, stdev=18.00 00:28:30.993 clat percentiles (usec): 00:28:30.993 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 77], 20.00th=[ 81], 00:28:30.993 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 91], 60.00th=[ 94], 00:28:30.993 | 70.00th=[ 97], 80.00th=[ 100], 90.00th=[ 106], 95.00th=[ 113], 00:28:30.993 | 99.00th=[ 130], 99.50th=[ 137], 99.90th=[ 155], 99.95th=[ 180], 00:28:30.993 | 99.99th=[ 297] 00:28:30.993 bw ( KiB/s): min=16351, max=16351, per=99.90%, avg=16351.00, stdev= 0.00, samples=1 00:28:30.993 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:28:30.993 lat (usec) : 100=42.37%, 250=57.59%, 500=0.03%, 750=0.01% 00:28:30.993 cpu : usr=1.60%, sys=7.20%, ctx=7706, majf=0, minf=2 00:28:30.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.993 issued rwts: total=3605,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:30.993 00:28:30.993 Run status group 0 (all jobs): 00:28:30.993 READ: bw=14.1MiB/s (14.8MB/s), 14.1MiB/s-14.1MiB/s (14.8MB/s-14.8MB/s), io=14.1MiB (14.8MB), run=1001-1001msec 00:28:30.993 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:28:30.993 00:28:30.993 Disk stats (read/write): 00:28:30.993 nvme0n1: ios=3328/3584, merge=0/0, ticks=509/347, in_queue=856, util=91.08% 00:28:30.993 08:25:03 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:30.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:28:30.993 08:25:04 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:30.993 08:25:04 -- common/autotest_common.sh@1198 -- # local i=0 00:28:30.993 08:25:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:30.993 08:25:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:30.993 08:25:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:30.993 08:25:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:30.993 08:25:04 -- common/autotest_common.sh@1210 -- # return 0 00:28:30.993 08:25:04 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:30.993 08:25:04 -- target/nmic.sh@53 -- # nvmftestfini 00:28:30.993 08:25:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:30.993 08:25:04 -- nvmf/common.sh@116 -- # sync 00:28:30.993 08:25:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:30.993 08:25:04 -- nvmf/common.sh@119 -- # set +e 00:28:30.993 08:25:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:30.993 08:25:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:30.993 rmmod nvme_tcp 00:28:30.993 rmmod nvme_fabrics 00:28:30.993 rmmod nvme_keyring 00:28:30.993 08:25:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:30.993 08:25:04 -- nvmf/common.sh@123 -- # set -e 00:28:30.994 08:25:04 -- nvmf/common.sh@124 -- # return 0 00:28:30.994 08:25:04 -- nvmf/common.sh@477 -- # '[' -n 
63340 ']' 00:28:30.994 08:25:04 -- nvmf/common.sh@478 -- # killprocess 63340 00:28:30.994 08:25:04 -- common/autotest_common.sh@926 -- # '[' -z 63340 ']' 00:28:30.994 08:25:04 -- common/autotest_common.sh@930 -- # kill -0 63340 00:28:30.994 08:25:04 -- common/autotest_common.sh@931 -- # uname 00:28:30.994 08:25:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:30.994 08:25:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63340 00:28:30.994 08:25:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:30.994 08:25:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:30.994 08:25:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63340' 00:28:30.994 killing process with pid 63340 00:28:30.994 08:25:04 -- common/autotest_common.sh@945 -- # kill 63340 00:28:30.994 08:25:04 -- common/autotest_common.sh@950 -- # wait 63340 00:28:31.252 08:25:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:31.252 08:25:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:31.252 08:25:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:31.252 08:25:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.252 08:25:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:31.252 08:25:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.252 08:25:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.252 08:25:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.252 08:25:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:31.252 00:28:31.252 real 0m5.762s 00:28:31.252 user 0m18.591s 00:28:31.252 sys 0m1.710s 00:28:31.252 08:25:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.252 08:25:04 -- common/autotest_common.sh@10 -- # set +x 00:28:31.252 ************************************ 00:28:31.252 END TEST nvmf_nmic 00:28:31.252 ************************************ 00:28:31.252 08:25:04 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:28:31.252 08:25:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:31.252 08:25:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:31.252 08:25:04 -- common/autotest_common.sh@10 -- # set +x 00:28:31.252 ************************************ 00:28:31.252 START TEST nvmf_fio_target 00:28:31.252 ************************************ 00:28:31.252 08:25:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:28:31.511 * Looking for test storage... 
00:28:31.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:31.511 08:25:04 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:31.511 08:25:04 -- nvmf/common.sh@7 -- # uname -s 00:28:31.511 08:25:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.511 08:25:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.511 08:25:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.511 08:25:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.511 08:25:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.511 08:25:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.511 08:25:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.511 08:25:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.511 08:25:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.512 08:25:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.512 08:25:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:31.512 08:25:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:31.512 08:25:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.512 08:25:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.512 08:25:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:31.512 08:25:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:31.512 08:25:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.512 08:25:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.512 08:25:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.512 08:25:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.512 08:25:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.512 08:25:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.512 08:25:04 -- paths/export.sh@5 
-- # export PATH 00:28:31.512 08:25:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.512 08:25:04 -- nvmf/common.sh@46 -- # : 0 00:28:31.512 08:25:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:31.512 08:25:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:31.512 08:25:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:31.512 08:25:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.512 08:25:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.512 08:25:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:31.512 08:25:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:31.512 08:25:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:31.512 08:25:04 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:31.512 08:25:04 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:31.512 08:25:04 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:31.512 08:25:04 -- target/fio.sh@16 -- # nvmftestinit 00:28:31.512 08:25:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:31.512 08:25:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.512 08:25:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:31.512 08:25:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:31.512 08:25:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:31.512 08:25:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.512 08:25:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.512 08:25:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.512 08:25:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:31.512 08:25:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:31.512 08:25:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:31.512 08:25:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:31.512 08:25:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:31.512 08:25:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:31.512 08:25:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.512 08:25:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.512 08:25:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:31.512 08:25:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:31.512 08:25:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:31.512 08:25:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:31.512 08:25:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:31.512 08:25:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.512 08:25:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:31.512 08:25:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:31.512 08:25:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:31.512 08:25:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:31.512 08:25:04 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:31.512 08:25:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:31.512 Cannot find device "nvmf_tgt_br" 00:28:31.512 08:25:04 -- nvmf/common.sh@154 -- # true 00:28:31.512 08:25:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:31.512 Cannot find device "nvmf_tgt_br2" 00:28:31.512 08:25:04 -- nvmf/common.sh@155 -- # true 00:28:31.512 08:25:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:31.512 08:25:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:31.512 Cannot find device "nvmf_tgt_br" 00:28:31.512 08:25:04 -- nvmf/common.sh@157 -- # true 00:28:31.512 08:25:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:31.512 Cannot find device "nvmf_tgt_br2" 00:28:31.512 08:25:04 -- nvmf/common.sh@158 -- # true 00:28:31.512 08:25:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:31.771 08:25:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:31.771 08:25:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:31.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:31.771 08:25:04 -- nvmf/common.sh@161 -- # true 00:28:31.772 08:25:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:31.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:31.772 08:25:04 -- nvmf/common.sh@162 -- # true 00:28:31.772 08:25:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:31.772 08:25:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:31.772 08:25:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:31.772 08:25:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:31.772 08:25:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:31.772 08:25:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:31.772 08:25:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:31.772 08:25:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:31.772 08:25:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:31.772 08:25:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:31.772 08:25:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:31.772 08:25:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:31.772 08:25:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:31.772 08:25:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:31.772 08:25:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:31.772 08:25:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:31.772 08:25:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:31.772 08:25:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:31.772 08:25:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:31.772 08:25:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:31.772 08:25:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:31.772 08:25:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:31.772 08:25:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:31.772 08:25:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:31.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:28:31.772 00:28:31.772 --- 10.0.0.2 ping statistics --- 00:28:31.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.772 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:28:31.772 08:25:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:31.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:31.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:28:31.772 00:28:31.772 --- 10.0.0.3 ping statistics --- 00:28:31.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.772 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:28:31.772 08:25:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:31.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:31.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:28:31.772 00:28:31.772 --- 10.0.0.1 ping statistics --- 00:28:31.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.772 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:28:31.772 08:25:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.772 08:25:05 -- nvmf/common.sh@421 -- # return 0 00:28:31.772 08:25:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:31.772 08:25:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.772 08:25:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:31.772 08:25:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:31.772 08:25:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.772 08:25:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:31.772 08:25:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:31.772 08:25:05 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:28:31.772 08:25:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:31.772 08:25:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:31.772 08:25:05 -- common/autotest_common.sh@10 -- # set +x 00:28:31.772 08:25:05 -- nvmf/common.sh@469 -- # nvmfpid=63619 00:28:31.772 08:25:05 -- nvmf/common.sh@470 -- # waitforlisten 63619 00:28:31.772 08:25:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:31.772 08:25:05 -- common/autotest_common.sh@819 -- # '[' -z 63619 ']' 00:28:31.772 08:25:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.772 08:25:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:31.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.772 08:25:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.772 08:25:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:31.772 08:25:05 -- common/autotest_common.sh@10 -- # set +x 00:28:32.031 [2024-04-17 08:25:05.146893] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:28:32.031 [2024-04-17 08:25:05.146962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.031 [2024-04-17 08:25:05.290175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:32.291 [2024-04-17 08:25:05.374685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:32.291 [2024-04-17 08:25:05.374813] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.291 [2024-04-17 08:25:05.374820] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.291 [2024-04-17 08:25:05.374826] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.291 [2024-04-17 08:25:05.375062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.291 [2024-04-17 08:25:05.375300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.291 [2024-04-17 08:25:05.375324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.291 [2024-04-17 08:25:05.375170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.860 08:25:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:32.860 08:25:05 -- common/autotest_common.sh@852 -- # return 0 00:28:32.860 08:25:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:32.860 08:25:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:32.860 08:25:05 -- common/autotest_common.sh@10 -- # set +x 00:28:32.860 08:25:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.860 08:25:06 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:33.119 [2024-04-17 08:25:06.249451] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.119 08:25:06 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:33.379 08:25:06 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:28:33.379 08:25:06 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:33.638 08:25:06 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:28:33.638 08:25:06 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:33.638 08:25:06 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:28:33.638 08:25:06 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:33.897 08:25:07 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:28:33.897 08:25:07 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:28:34.156 08:25:07 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:34.416 08:25:07 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:28:34.416 08:25:07 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:34.676 08:25:07 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:28:34.676 08:25:07 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:34.936 08:25:08 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:28:34.936 08:25:08 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:28:35.196 08:25:08 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:35.196 08:25:08 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:28:35.196 08:25:08 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:35.456 08:25:08 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:28:35.456 08:25:08 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:35.716 08:25:08 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.975 [2024-04-17 08:25:09.073564] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.975 08:25:09 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:28:35.975 08:25:09 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:28:36.235 08:25:09 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:36.494 08:25:09 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:28:36.494 08:25:09 -- common/autotest_common.sh@1177 -- # local i=0 00:28:36.494 08:25:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:28:36.494 08:25:09 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:28:36.494 08:25:09 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:28:36.494 08:25:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:28:38.478 08:25:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:28:38.478 08:25:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:28:38.478 08:25:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:28:38.478 08:25:11 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:28:38.478 08:25:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:28:38.478 08:25:11 -- common/autotest_common.sh@1187 -- # return 0 00:28:38.478 08:25:11 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:28:38.478 [global] 00:28:38.478 thread=1 00:28:38.478 invalidate=1 00:28:38.478 rw=write 00:28:38.478 time_based=1 00:28:38.478 runtime=1 00:28:38.478 ioengine=libaio 00:28:38.478 direct=1 00:28:38.478 bs=4096 00:28:38.478 iodepth=1 00:28:38.478 norandommap=0 00:28:38.478 numjobs=1 00:28:38.478 00:28:38.478 verify_dump=1 00:28:38.478 verify_backlog=512 00:28:38.478 verify_state_save=0 00:28:38.478 do_verify=1 00:28:38.478 verify=crc32c-intel 00:28:38.478 [job0] 00:28:38.478 filename=/dev/nvme0n1 00:28:38.478 [job1] 00:28:38.478 filename=/dev/nvme0n2 00:28:38.478 [job2] 00:28:38.478 filename=/dev/nvme0n3 00:28:38.478 [job3] 00:28:38.478 filename=/dev/nvme0n4 00:28:38.478 Could not set queue depth (nvme0n1) 00:28:38.478 Could not set queue depth (nvme0n2) 
00:28:38.478 Could not set queue depth (nvme0n3) 00:28:38.478 Could not set queue depth (nvme0n4) 00:28:38.744 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:38.744 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:38.744 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:38.744 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:38.744 fio-3.35 00:28:38.744 Starting 4 threads 00:28:39.804 00:28:39.804 job0: (groupid=0, jobs=1): err= 0: pid=63794: Wed Apr 17 08:25:13 2024 00:28:39.804 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:28:39.804 slat (nsec): min=5675, max=37319, avg=10270.08, stdev=4431.61 00:28:39.804 clat (usec): min=243, max=548, avg=356.96, stdev=57.72 00:28:39.804 lat (usec): min=250, max=568, avg=367.23, stdev=59.81 00:28:39.804 clat percentiles (usec): 00:28:39.804 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:28:39.804 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 363], 00:28:39.804 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 449], 95.00th=[ 482], 00:28:39.804 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 545], 99.95th=[ 545], 00:28:39.804 | 99.99th=[ 545] 00:28:39.804 write: IOPS=1589, BW=6358KiB/s (6510kB/s)(6364KiB/1001msec); 0 zone resets 00:28:39.804 slat (usec): min=7, max=380, avg=21.55, stdev=14.46 00:28:39.804 clat (usec): min=139, max=419, avg=249.47, stdev=63.57 00:28:39.804 lat (usec): min=161, max=771, avg=271.01, stdev=71.91 00:28:39.804 clat percentiles (usec): 00:28:39.804 | 1.00th=[ 153], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 192], 00:28:39.804 | 30.00th=[ 202], 40.00th=[ 225], 50.00th=[ 239], 60.00th=[ 251], 00:28:39.804 | 70.00th=[ 269], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 375], 00:28:39.804 | 99.00th=[ 400], 99.50th=[ 404], 99.90th=[ 412], 99.95th=[ 420], 00:28:39.804 | 99.99th=[ 420] 00:28:39.804 bw ( KiB/s): min= 8192, max= 8192, per=24.21%, avg=8192.00, stdev= 0.00, samples=1 00:28:39.804 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:28:39.804 lat (usec) : 250=30.51%, 500=68.66%, 750=0.83% 00:28:39.804 cpu : usr=1.00%, sys=4.20%, ctx=3129, majf=0, minf=15 00:28:39.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.804 issued rwts: total=1536,1591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:39.804 job1: (groupid=0, jobs=1): err= 0: pid=63795: Wed Apr 17 08:25:13 2024 00:28:39.804 read: IOPS=1580, BW=6322KiB/s (6473kB/s)(6328KiB/1001msec) 00:28:39.804 slat (nsec): min=9787, max=33841, avg=12411.58, stdev=2622.05 00:28:39.804 clat (usec): min=152, max=2399, avg=360.57, stdev=109.69 00:28:39.804 lat (usec): min=165, max=2412, avg=372.98, stdev=110.39 00:28:39.804 clat percentiles (usec): 00:28:39.804 | 1.00th=[ 165], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 302], 00:28:39.804 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:28:39.804 | 70.00th=[ 396], 80.00th=[ 445], 90.00th=[ 474], 95.00th=[ 594], 00:28:39.804 | 99.00th=[ 660], 99.50th=[ 668], 99.90th=[ 742], 99.95th=[ 2409], 00:28:39.804 | 99.99th=[ 2409] 00:28:39.804 write: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:28:39.804 slat (usec): min=13, max=120, avg=18.70, stdev= 6.35 00:28:39.804 clat (usec): min=86, max=347, avg=179.12, stdev=56.51 00:28:39.804 lat (usec): min=102, max=363, avg=197.81, stdev=56.53 00:28:39.804 clat percentiles (usec): 00:28:39.804 | 1.00th=[ 96], 5.00th=[ 103], 10.00th=[ 109], 20.00th=[ 116], 00:28:39.804 | 30.00th=[ 127], 40.00th=[ 159], 50.00th=[ 178], 60.00th=[ 200], 00:28:39.804 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 260], 00:28:39.804 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 322], 00:28:39.804 | 99.99th=[ 347] 00:28:39.804 bw ( KiB/s): min= 8192, max= 8192, per=24.21%, avg=8192.00, stdev= 0.00, samples=1 00:28:39.804 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:28:39.804 lat (usec) : 100=1.79%, 250=50.52%, 500=44.55%, 750=3.11% 00:28:39.804 lat (msec) : 4=0.03% 00:28:39.804 cpu : usr=0.90%, sys=4.50%, ctx=3630, majf=0, minf=2 00:28:39.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.805 issued rwts: total=1582,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:39.805 job2: (groupid=0, jobs=1): err= 0: pid=63796: Wed Apr 17 08:25:13 2024 00:28:39.805 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:28:39.805 slat (nsec): min=7443, max=58782, avg=14596.11, stdev=8123.70 00:28:39.805 clat (usec): min=239, max=538, avg=352.16, stdev=55.33 00:28:39.805 lat (usec): min=248, max=566, avg=366.76, stdev=59.45 00:28:39.805 clat percentiles (usec): 00:28:39.805 | 1.00th=[ 269], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:28:39.805 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 359], 00:28:39.805 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 437], 95.00th=[ 465], 00:28:39.805 | 99.00th=[ 494], 99.50th=[ 506], 99.90th=[ 537], 99.95th=[ 537], 00:28:39.805 | 99.99th=[ 537] 00:28:39.805 write: IOPS=1591, BW=6366KiB/s (6518kB/s)(6372KiB/1001msec); 0 zone resets 00:28:39.805 slat (usec): min=7, max=112, avg=23.87, stdev=14.40 00:28:39.805 clat (usec): min=108, max=435, avg=246.67, stdev=59.91 00:28:39.805 lat (usec): min=139, max=455, avg=270.55, stdev=70.58 00:28:39.805 clat percentiles (usec): 00:28:39.805 | 1.00th=[ 157], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 190], 00:28:39.805 | 30.00th=[ 204], 40.00th=[ 223], 50.00th=[ 237], 60.00th=[ 251], 00:28:39.805 | 70.00th=[ 265], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 359], 00:28:39.805 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 412], 99.95th=[ 437], 00:28:39.805 | 99.99th=[ 437] 00:28:39.805 bw ( KiB/s): min= 8192, max= 8192, per=24.21%, avg=8192.00, stdev= 0.00, samples=1 00:28:39.805 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:28:39.805 lat (usec) : 250=30.58%, 500=69.16%, 750=0.26% 00:28:39.805 cpu : usr=1.30%, sys=5.10%, ctx=3129, majf=0, minf=11 00:28:39.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.805 issued rwts: total=1536,1593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:39.805 job3: (groupid=0, jobs=1): 
err= 0: pid=63797: Wed Apr 17 08:25:13 2024 00:28:39.805 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:28:39.805 slat (nsec): min=7381, max=27526, avg=8792.29, stdev=1658.29 00:28:39.805 clat (usec): min=135, max=2035, avg=166.75, stdev=35.83 00:28:39.805 lat (usec): min=143, max=2044, avg=175.54, stdev=35.88 00:28:39.805 clat percentiles (usec): 00:28:39.805 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:28:39.805 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:28:39.805 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:28:39.805 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 260], 99.95th=[ 347], 00:28:39.805 | 99.99th=[ 2040] 00:28:39.805 write: IOPS=3231, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec); 0 zone resets 00:28:39.805 slat (usec): min=9, max=147, avg=15.59, stdev= 8.89 00:28:39.805 clat (usec): min=89, max=7696, avg=124.50, stdev=135.82 00:28:39.805 lat (usec): min=101, max=7708, avg=140.09, stdev=136.40 00:28:39.805 clat percentiles (usec): 00:28:39.805 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 110], 00:28:39.805 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 123], 00:28:39.805 | 70.00th=[ 127], 80.00th=[ 133], 90.00th=[ 143], 95.00th=[ 151], 00:28:39.805 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 465], 99.95th=[ 963], 00:28:39.805 | 99.99th=[ 7701] 00:28:39.805 bw ( KiB/s): min=12688, max=12688, per=37.50%, avg=12688.00, stdev= 0.00, samples=1 00:28:39.805 iops : min= 3172, max= 3172, avg=3172.00, stdev= 0.00, samples=1 00:28:39.805 lat (usec) : 100=1.93%, 250=97.91%, 500=0.10%, 1000=0.03% 00:28:39.805 lat (msec) : 4=0.02%, 10=0.02% 00:28:39.805 cpu : usr=1.50%, sys=6.00%, ctx=6307, majf=0, minf=7 00:28:39.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:39.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:39.805 issued rwts: total=3072,3235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:39.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:39.805 00:28:39.805 Run status group 0 (all jobs): 00:28:39.805 READ: bw=30.1MiB/s (31.6MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.2MiB (31.6MB), run=1001-1001msec 00:28:39.805 WRITE: bw=33.0MiB/s (34.6MB/s), 6358KiB/s-12.6MiB/s (6510kB/s-13.2MB/s), io=33.1MiB (34.7MB), run=1001-1001msec 00:28:39.805 00:28:39.805 Disk stats (read/write): 00:28:39.805 nvme0n1: ios=1293/1536, merge=0/0, ticks=438/366, in_queue=804, util=89.58% 00:28:39.805 nvme0n2: ios=1585/1546, merge=0/0, ticks=580/294, in_queue=874, util=89.61% 00:28:39.805 nvme0n3: ios=1265/1536, merge=0/0, ticks=463/381, in_queue=844, util=90.20% 00:28:39.805 nvme0n4: ios=2560/2990, merge=0/0, ticks=432/382, in_queue=814, util=89.54% 00:28:39.805 08:25:13 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:28:39.805 [global] 00:28:39.805 thread=1 00:28:39.805 invalidate=1 00:28:39.805 rw=randwrite 00:28:39.805 time_based=1 00:28:39.805 runtime=1 00:28:39.805 ioengine=libaio 00:28:39.805 direct=1 00:28:39.805 bs=4096 00:28:39.805 iodepth=1 00:28:39.805 norandommap=0 00:28:39.805 numjobs=1 00:28:39.805 00:28:39.805 verify_dump=1 00:28:39.805 verify_backlog=512 00:28:39.805 verify_state_save=0 00:28:39.805 do_verify=1 00:28:39.805 verify=crc32c-intel 00:28:39.805 [job0] 00:28:39.805 filename=/dev/nvme0n1 00:28:39.805 [job1] 00:28:39.805 
filename=/dev/nvme0n2 00:28:39.805 [job2] 00:28:39.805 filename=/dev/nvme0n3 00:28:39.805 [job3] 00:28:39.805 filename=/dev/nvme0n4 00:28:40.089 Could not set queue depth (nvme0n1) 00:28:40.089 Could not set queue depth (nvme0n2) 00:28:40.089 Could not set queue depth (nvme0n3) 00:28:40.089 Could not set queue depth (nvme0n4) 00:28:40.089 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:40.089 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:40.089 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:40.089 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:40.089 fio-3.35 00:28:40.089 Starting 4 threads 00:28:41.058 00:28:41.058 job0: (groupid=0, jobs=1): err= 0: pid=63850: Wed Apr 17 08:25:14 2024 00:28:41.058 read: IOPS=3344, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec) 00:28:41.058 slat (nsec): min=6492, max=23297, avg=8141.14, stdev=1723.07 00:28:41.058 clat (usec): min=120, max=228, avg=149.03, stdev=11.99 00:28:41.058 lat (usec): min=127, max=235, avg=157.17, stdev=12.32 00:28:41.058 clat percentiles (usec): 00:28:41.058 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:28:41.058 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:28:41.058 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:28:41.058 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 202], 99.95th=[ 217], 00:28:41.058 | 99.99th=[ 229] 00:28:41.058 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:28:41.058 slat (usec): min=8, max=232, avg=14.69, stdev= 9.53 00:28:41.058 clat (usec): min=74, max=1438, avg=115.26, stdev=29.41 00:28:41.058 lat (usec): min=85, max=1449, avg=129.95, stdev=32.23 00:28:41.058 clat percentiles (usec): 00:28:41.058 | 1.00th=[ 89], 5.00th=[ 95], 10.00th=[ 99], 20.00th=[ 103], 00:28:41.058 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 117], 00:28:41.058 | 70.00th=[ 121], 80.00th=[ 126], 90.00th=[ 133], 95.00th=[ 141], 00:28:41.058 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 265], 99.95th=[ 799], 00:28:41.058 | 99.99th=[ 1434] 00:28:41.058 bw ( KiB/s): min=15280, max=15280, per=34.98%, avg=15280.00, stdev= 0.00, samples=1 00:28:41.058 iops : min= 3820, max= 3820, avg=3820.00, stdev= 0.00, samples=1 00:28:41.058 lat (usec) : 100=6.92%, 250=93.02%, 500=0.01%, 750=0.01%, 1000=0.01% 00:28:41.058 lat (msec) : 2=0.01% 00:28:41.058 cpu : usr=1.70%, sys=6.30%, ctx=6932, majf=0, minf=10 00:28:41.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.058 issued rwts: total=3348,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:41.058 job1: (groupid=0, jobs=1): err= 0: pid=63851: Wed Apr 17 08:25:14 2024 00:28:41.058 read: IOPS=1996, BW=7984KiB/s (8176kB/s)(7992KiB/1001msec) 00:28:41.058 slat (usec): min=6, max=125, avg=12.96, stdev= 9.28 00:28:41.058 clat (usec): min=149, max=1467, avg=270.26, stdev=58.24 00:28:41.058 lat (usec): min=161, max=1478, avg=283.22, stdev=60.58 00:28:41.058 clat percentiles (usec): 00:28:41.058 | 1.00th=[ 204], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:28:41.058 | 
30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:28:41.058 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 367], 00:28:41.058 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 619], 99.95th=[ 1467], 00:28:41.058 | 99.99th=[ 1467] 00:28:41.058 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:28:41.058 slat (usec): min=10, max=133, avg=19.70, stdev=11.24 00:28:41.058 clat (usec): min=92, max=363, avg=189.02, stdev=36.04 00:28:41.058 lat (usec): min=108, max=473, avg=208.72, stdev=38.72 00:28:41.058 clat percentiles (usec): 00:28:41.058 | 1.00th=[ 103], 5.00th=[ 115], 10.00th=[ 126], 20.00th=[ 169], 00:28:41.058 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:28:41.058 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 237], 00:28:41.058 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 330], 99.95th=[ 338], 00:28:41.058 | 99.99th=[ 363] 00:28:41.058 bw ( KiB/s): min= 8400, max= 8400, per=19.23%, avg=8400.00, stdev= 0.00, samples=1 00:28:41.058 iops : min= 2100, max= 2100, avg=2100.00, stdev= 0.00, samples=1 00:28:41.058 lat (usec) : 100=0.22%, 250=66.96%, 500=32.20%, 750=0.59% 00:28:41.058 lat (msec) : 2=0.02% 00:28:41.058 cpu : usr=1.30%, sys=5.20%, ctx=4047, majf=0, minf=5 00:28:41.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.058 issued rwts: total=1998,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:41.058 job2: (groupid=0, jobs=1): err= 0: pid=63852: Wed Apr 17 08:25:14 2024 00:28:41.058 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:28:41.058 slat (nsec): min=6837, max=89989, avg=8810.06, stdev=2854.22 00:28:41.058 clat (usec): min=128, max=2220, avg=163.07, stdev=39.93 00:28:41.058 lat (usec): min=135, max=2227, avg=171.88, stdev=40.11 00:28:41.058 clat percentiles (usec): 00:28:41.058 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:28:41.058 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:28:41.058 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:28:41.058 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 265], 99.95th=[ 449], 00:28:41.058 | 99.99th=[ 2212] 00:28:41.058 write: IOPS=3249, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:28:41.058 slat (usec): min=8, max=228, avg=16.29, stdev=11.05 00:28:41.058 clat (usec): min=90, max=466, avg=126.34, stdev=19.49 00:28:41.058 lat (usec): min=102, max=533, avg=142.63, stdev=26.63 00:28:41.058 clat percentiles (usec): 00:28:41.058 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 114], 00:28:41.058 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 127], 00:28:41.058 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 157], 00:28:41.058 | 99.00th=[ 192], 99.50th=[ 204], 99.90th=[ 243], 99.95th=[ 420], 00:28:41.058 | 99.99th=[ 465] 00:28:41.058 bw ( KiB/s): min=12288, max=12288, per=28.13%, avg=12288.00, stdev= 0.00, samples=1 00:28:41.058 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:28:41.058 lat (usec) : 100=0.79%, 250=99.08%, 500=0.11% 00:28:41.058 lat (msec) : 4=0.02% 00:28:41.058 cpu : usr=1.50%, sys=6.50%, ctx=6326, majf=0, minf=15 00:28:41.058 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.058 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.058 issued rwts: total=3072,3253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.058 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:41.058 job3: (groupid=0, jobs=1): err= 0: pid=63853: Wed Apr 17 08:25:14 2024 00:28:41.058 read: IOPS=1906, BW=7624KiB/s (7807kB/s)(7632KiB/1001msec) 00:28:41.058 slat (nsec): min=6873, max=80238, avg=12246.23, stdev=7954.08 00:28:41.058 clat (usec): min=149, max=683, avg=265.02, stdev=39.74 00:28:41.059 lat (usec): min=158, max=696, avg=277.26, stdev=42.60 00:28:41.059 clat percentiles (usec): 00:28:41.059 | 1.00th=[ 194], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:28:41.059 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:28:41.059 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 338], 00:28:41.059 | 99.00th=[ 424], 99.50th=[ 469], 99.90th=[ 570], 99.95th=[ 685], 00:28:41.059 | 99.99th=[ 685] 00:28:41.059 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:28:41.059 slat (usec): min=11, max=127, avg=20.95, stdev=13.14 00:28:41.059 clat (usec): min=104, max=610, avg=205.86, stdev=43.34 00:28:41.059 lat (usec): min=117, max=639, avg=226.81, stdev=51.30 00:28:41.059 clat percentiles (usec): 00:28:41.059 | 1.00th=[ 123], 5.00th=[ 137], 10.00th=[ 159], 20.00th=[ 182], 00:28:41.059 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:28:41.059 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 253], 95.00th=[ 302], 00:28:41.059 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 400], 99.95th=[ 420], 00:28:41.059 | 99.99th=[ 611] 00:28:41.059 bw ( KiB/s): min= 8344, max= 8344, per=19.10%, avg=8344.00, stdev= 0.00, samples=1 00:28:41.059 iops : min= 2086, max= 2086, avg=2086.00, stdev= 0.00, samples=1 00:28:41.059 lat (usec) : 250=63.42%, 500=36.43%, 750=0.15% 00:28:41.059 cpu : usr=1.60%, sys=5.00%, ctx=3956, majf=0, minf=15 00:28:41.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.059 issued rwts: total=1908,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:41.059 00:28:41.059 Run status group 0 (all jobs): 00:28:41.059 READ: bw=40.3MiB/s (42.3MB/s), 7624KiB/s-13.1MiB/s (7807kB/s-13.7MB/s), io=40.3MiB (42.3MB), run=1001-1001msec 00:28:41.059 WRITE: bw=42.7MiB/s (44.7MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=42.7MiB (44.8MB), run=1001-1001msec 00:28:41.059 00:28:41.059 Disk stats (read/write): 00:28:41.059 nvme0n1: ios=2981/3072, merge=0/0, ticks=456/376, in_queue=832, util=88.68% 00:28:41.059 nvme0n2: ios=1627/2048, merge=0/0, ticks=435/406, in_queue=841, util=89.09% 00:28:41.059 nvme0n3: ios=2560/2953, merge=0/0, ticks=423/389, in_queue=812, util=89.43% 00:28:41.059 nvme0n4: ios=1536/1921, merge=0/0, ticks=411/416, in_queue=827, util=89.89% 00:28:41.319 08:25:14 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:28:41.319 [global] 00:28:41.319 thread=1 00:28:41.319 invalidate=1 00:28:41.319 rw=write 00:28:41.319 time_based=1 00:28:41.319 runtime=1 00:28:41.319 ioengine=libaio 00:28:41.319 direct=1 00:28:41.319 bs=4096 00:28:41.319 iodepth=128 00:28:41.319 norandommap=0 00:28:41.319 numjobs=1 00:28:41.319 
00:28:41.319 verify_dump=1 00:28:41.319 verify_backlog=512 00:28:41.319 verify_state_save=0 00:28:41.319 do_verify=1 00:28:41.319 verify=crc32c-intel 00:28:41.319 [job0] 00:28:41.319 filename=/dev/nvme0n1 00:28:41.319 [job1] 00:28:41.319 filename=/dev/nvme0n2 00:28:41.319 [job2] 00:28:41.319 filename=/dev/nvme0n3 00:28:41.319 [job3] 00:28:41.319 filename=/dev/nvme0n4 00:28:41.319 Could not set queue depth (nvme0n1) 00:28:41.319 Could not set queue depth (nvme0n2) 00:28:41.319 Could not set queue depth (nvme0n3) 00:28:41.319 Could not set queue depth (nvme0n4) 00:28:41.319 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:41.319 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:41.319 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:41.319 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:41.319 fio-3.35 00:28:41.319 Starting 4 threads 00:28:42.699 00:28:42.699 job0: (groupid=0, jobs=1): err= 0: pid=63912: Wed Apr 17 08:25:15 2024 00:28:42.699 read: IOPS=4720, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1003msec) 00:28:42.699 slat (usec): min=4, max=6014, avg=96.95, stdev=426.91 00:28:42.699 clat (usec): min=467, max=19367, avg=12823.68, stdev=1616.66 00:28:42.699 lat (usec): min=2643, max=22747, avg=12920.63, stdev=1629.17 00:28:42.699 clat percentiles (usec): 00:28:42.699 | 1.00th=[ 5604], 5.00th=[10683], 10.00th=[11469], 20.00th=[12125], 00:28:42.699 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:28:42.699 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14222], 95.00th=[14615], 00:28:42.699 | 99.00th=[16909], 99.50th=[18220], 99.90th=[18482], 99.95th=[18744], 00:28:42.699 | 99.99th=[19268] 00:28:42.699 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:28:42.699 slat (usec): min=10, max=6057, avg=96.88, stdev=465.85 00:28:42.699 clat (usec): min=5316, max=19996, avg=12882.72, stdev=1373.17 00:28:42.699 lat (usec): min=5341, max=20006, avg=12979.60, stdev=1439.29 00:28:42.699 clat percentiles (usec): 00:28:42.699 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[11207], 20.00th=[11863], 00:28:42.699 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:28:42.699 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14091], 95.00th=[15008], 00:28:42.699 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19530], 99.95th=[20055], 00:28:42.699 | 99.99th=[20055] 00:28:42.699 bw ( KiB/s): min=20472, max=20521, per=27.86%, avg=20496.50, stdev=34.65, samples=2 00:28:42.699 iops : min= 5118, max= 5130, avg=5124.00, stdev= 8.49, samples=2 00:28:42.699 lat (usec) : 500=0.01% 00:28:42.699 lat (msec) : 4=0.37%, 10=2.49%, 20=97.14% 00:28:42.699 cpu : usr=4.79%, sys=19.16%, ctx=355, majf=0, minf=2 00:28:42.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:42.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:42.699 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:42.699 job1: (groupid=0, jobs=1): err= 0: pid=63913: Wed Apr 17 08:25:15 2024 00:28:42.699 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:28:42.699 slat (usec): min=4, max=4477, avg=97.63, stdev=410.51 
00:28:42.699 clat (usec): min=10181, max=18445, avg=13143.04, stdev=1175.43 00:28:42.699 lat (usec): min=10196, max=18471, avg=13240.67, stdev=1203.50 00:28:42.699 clat percentiles (usec): 00:28:42.700 | 1.00th=[10552], 5.00th=[11338], 10.00th=[11731], 20.00th=[12256], 00:28:42.700 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:28:42.700 | 70.00th=[13566], 80.00th=[14091], 90.00th=[14877], 95.00th=[15270], 00:28:42.700 | 99.00th=[16057], 99.50th=[16712], 99.90th=[17695], 99.95th=[17957], 00:28:42.700 | 99.99th=[18482] 00:28:42.700 write: IOPS=4835, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1002msec); 0 zone resets 00:28:42.700 slat (usec): min=6, max=18148, avg=105.27, stdev=528.02 00:28:42.700 clat (usec): min=211, max=31817, avg=13621.37, stdev=2977.12 00:28:42.700 lat (usec): min=3355, max=31863, avg=13726.64, stdev=3014.51 00:28:42.700 clat percentiles (usec): 00:28:42.700 | 1.00th=[ 8225], 5.00th=[11469], 10.00th=[11863], 20.00th=[12387], 00:28:42.700 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:28:42.700 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[18482], 00:28:42.700 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[31327], 00:28:42.700 | 99.99th=[31851] 00:28:42.700 bw ( KiB/s): min=18288, max=19448, per=25.65%, avg=18868.00, stdev=820.24, samples=2 00:28:42.700 iops : min= 4572, max= 4862, avg=4717.00, stdev=205.06, samples=2 00:28:42.700 lat (usec) : 250=0.01% 00:28:42.700 lat (msec) : 4=0.19%, 10=0.71%, 20=97.73%, 50=1.36% 00:28:42.700 cpu : usr=4.79%, sys=16.67%, ctx=354, majf=0, minf=3 00:28:42.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:28:42.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:42.700 issued rwts: total=4608,4845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:42.700 job2: (groupid=0, jobs=1): err= 0: pid=63914: Wed Apr 17 08:25:15 2024 00:28:42.700 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:28:42.700 slat (usec): min=5, max=6767, avg=113.16, stdev=520.54 00:28:42.700 clat (usec): min=10667, max=20725, avg=15223.13, stdev=1101.01 00:28:42.700 lat (usec): min=13466, max=20885, avg=15336.29, stdev=980.56 00:28:42.700 clat percentiles (usec): 00:28:42.700 | 1.00th=[12125], 5.00th=[13960], 10.00th=[14222], 20.00th=[14615], 00:28:42.700 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:28:42.700 | 70.00th=[15533], 80.00th=[15664], 90.00th=[15795], 95.00th=[16188], 00:28:42.700 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20579], 99.95th=[20841], 00:28:42.700 | 99.99th=[20841] 00:28:42.700 write: IOPS=4248, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1002msec); 0 zone resets 00:28:42.700 slat (usec): min=9, max=3178, avg=116.29, stdev=450.33 00:28:42.700 clat (usec): min=227, max=16283, avg=15015.42, stdev=1490.40 00:28:42.700 lat (usec): min=2944, max=16322, avg=15131.71, stdev=1421.08 00:28:42.700 clat percentiles (usec): 00:28:42.700 | 1.00th=[ 6980], 5.00th=[13173], 10.00th=[14091], 20.00th=[14746], 00:28:42.700 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15401], 60.00th=[15533], 00:28:42.700 | 70.00th=[15664], 80.00th=[15795], 90.00th=[15926], 95.00th=[16057], 00:28:42.700 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16319], 99.95th=[16319], 00:28:42.700 | 99.99th=[16319] 00:28:42.700 bw ( KiB/s): min=16384, max=16681, per=22.47%, avg=16532.50, 
stdev=210.01, samples=2 00:28:42.700 iops : min= 4096, max= 4170, avg=4133.00, stdev=52.33, samples=2 00:28:42.700 lat (usec) : 250=0.01% 00:28:42.700 lat (msec) : 4=0.35%, 10=0.42%, 20=98.48%, 50=0.74% 00:28:42.700 cpu : usr=4.60%, sys=17.28%, ctx=271, majf=0, minf=3 00:28:42.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:42.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:42.700 issued rwts: total=4096,4257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:42.700 job3: (groupid=0, jobs=1): err= 0: pid=63915: Wed Apr 17 08:25:15 2024 00:28:42.700 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:28:42.700 slat (usec): min=7, max=4270, avg=109.99, stdev=483.62 00:28:42.700 clat (usec): min=9914, max=20077, avg=14966.90, stdev=987.45 00:28:42.700 lat (usec): min=12144, max=20163, avg=15076.89, stdev=892.52 00:28:42.700 clat percentiles (usec): 00:28:42.700 | 1.00th=[11994], 5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:28:42.700 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:28:42.700 | 70.00th=[15533], 80.00th=[15664], 90.00th=[15926], 95.00th=[16188], 00:28:42.700 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18744], 99.95th=[18744], 00:28:42.700 | 99.99th=[20055] 00:28:42.700 write: IOPS=4220, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1001msec); 0 zone resets 00:28:42.700 slat (usec): min=14, max=4007, avg=118.31, stdev=438.44 00:28:42.700 clat (usec): min=162, max=26118, avg=15376.11, stdev=1906.69 00:28:42.700 lat (usec): min=3013, max=26159, avg=15494.41, stdev=1869.81 00:28:42.700 clat percentiles (usec): 00:28:42.700 | 1.00th=[ 7308], 5.00th=[13304], 10.00th=[14484], 20.00th=[14877], 00:28:42.700 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:28:42.700 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16188], 95.00th=[17433], 00:28:42.700 | 99.00th=[21365], 99.50th=[22938], 99.90th=[26084], 99.95th=[26084], 00:28:42.700 | 99.99th=[26084] 00:28:42.700 bw ( KiB/s): min=16392, max=16416, per=22.30%, avg=16404.00, stdev=16.97, samples=2 00:28:42.700 iops : min= 4098, max= 4104, avg=4101.00, stdev= 4.24, samples=2 00:28:42.700 lat (usec) : 250=0.01% 00:28:42.700 lat (msec) : 4=0.31%, 10=0.47%, 20=97.74%, 50=1.47% 00:28:42.700 cpu : usr=5.20%, sys=18.50%, ctx=359, majf=0, minf=11 00:28:42.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:42.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:42.700 issued rwts: total=4096,4225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:42.700 00:28:42.700 Run status group 0 (all jobs): 00:28:42.700 READ: bw=68.3MiB/s (71.6MB/s), 16.0MiB/s-18.4MiB/s (16.7MB/s-19.3MB/s), io=68.5MiB (71.8MB), run=1001-1003msec 00:28:42.700 WRITE: bw=71.8MiB/s (75.3MB/s), 16.5MiB/s-19.9MiB/s (17.3MB/s-20.9MB/s), io=72.1MiB (75.6MB), run=1001-1003msec 00:28:42.700 00:28:42.700 Disk stats (read/write): 00:28:42.700 nvme0n1: ios=4146/4450, merge=0/0, ticks=25289/23011, in_queue=48300, util=89.48% 00:28:42.700 nvme0n2: ios=4145/4144, merge=0/0, ticks=16450/15288, in_queue=31738, util=89.63% 00:28:42.700 nvme0n3: ios=3616/3744, merge=0/0, ticks=12007/11645, in_queue=23652, util=90.13% 
00:28:42.700 nvme0n4: ios=3605/3744, merge=0/0, ticks=11634/11579, in_queue=23213, util=90.10% 00:28:42.700 08:25:15 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:28:42.700 [global] 00:28:42.700 thread=1 00:28:42.700 invalidate=1 00:28:42.700 rw=randwrite 00:28:42.700 time_based=1 00:28:42.700 runtime=1 00:28:42.700 ioengine=libaio 00:28:42.700 direct=1 00:28:42.700 bs=4096 00:28:42.700 iodepth=128 00:28:42.700 norandommap=0 00:28:42.700 numjobs=1 00:28:42.700 00:28:42.700 verify_dump=1 00:28:42.700 verify_backlog=512 00:28:42.700 verify_state_save=0 00:28:42.700 do_verify=1 00:28:42.700 verify=crc32c-intel 00:28:42.700 [job0] 00:28:42.700 filename=/dev/nvme0n1 00:28:42.700 [job1] 00:28:42.700 filename=/dev/nvme0n2 00:28:42.700 [job2] 00:28:42.700 filename=/dev/nvme0n3 00:28:42.700 [job3] 00:28:42.700 filename=/dev/nvme0n4 00:28:42.700 Could not set queue depth (nvme0n1) 00:28:42.700 Could not set queue depth (nvme0n2) 00:28:42.700 Could not set queue depth (nvme0n3) 00:28:42.700 Could not set queue depth (nvme0n4) 00:28:42.960 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:42.960 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:42.960 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:42.960 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:42.960 fio-3.35 00:28:42.960 Starting 4 threads 00:28:43.895 00:28:43.895 job0: (groupid=0, jobs=1): err= 0: pid=63974: Wed Apr 17 08:25:17 2024 00:28:43.895 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:28:43.895 slat (usec): min=9, max=20698, avg=152.06, stdev=1036.73 00:28:43.895 clat (usec): min=5939, max=58344, avg=21067.89, stdev=7504.03 00:28:43.895 lat (usec): min=5960, max=58380, avg=21219.95, stdev=7570.68 00:28:43.895 clat percentiles (usec): 00:28:43.895 | 1.00th=[10290], 5.00th=[13304], 10.00th=[13698], 20.00th=[16057], 00:28:43.895 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[19530], 00:28:43.895 | 70.00th=[20055], 80.00th=[23725], 90.00th=[36963], 95.00th=[39584], 00:28:43.895 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[54264], 00:28:43.895 | 99.99th=[58459] 00:28:43.895 write: IOPS=3604, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1003msec); 0 zone resets 00:28:43.895 slat (usec): min=12, max=16792, avg=113.68, stdev=640.15 00:28:43.895 clat (usec): min=2046, max=31637, avg=14191.54, stdev=4400.49 00:28:43.895 lat (usec): min=2077, max=31703, avg=14305.22, stdev=4388.94 00:28:43.895 clat percentiles (usec): 00:28:43.895 | 1.00th=[ 6980], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:28:43.895 | 30.00th=[10814], 40.00th=[11994], 50.00th=[13304], 60.00th=[15795], 00:28:43.895 | 70.00th=[16450], 80.00th=[18744], 90.00th=[20055], 95.00th=[21365], 00:28:43.895 | 99.00th=[25297], 99.50th=[25822], 99.90th=[26084], 99.95th=[28705], 00:28:43.895 | 99.99th=[31589] 00:28:43.895 bw ( KiB/s): min=13250, max=15448, per=27.46%, avg=14349.00, stdev=1554.22, samples=2 00:28:43.895 iops : min= 3312, max= 3862, avg=3587.00, stdev=388.91, samples=2 00:28:43.895 lat (msec) : 4=0.13%, 10=11.65%, 20=68.18%, 50=20.02%, 100=0.03% 00:28:43.895 cpu : usr=4.69%, sys=17.86%, ctx=153, majf=0, minf=9 00:28:43.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 
00:28:43.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:43.895 issued rwts: total=3584,3615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:43.895 job1: (groupid=0, jobs=1): err= 0: pid=63975: Wed Apr 17 08:25:17 2024 00:28:43.895 read: IOPS=1519, BW=6077KiB/s (6223kB/s)(6144KiB/1011msec) 00:28:43.895 slat (usec): min=6, max=15541, avg=337.57, stdev=1583.69 00:28:43.895 clat (usec): min=21219, max=70703, avg=45923.30, stdev=9766.67 00:28:43.895 lat (usec): min=21239, max=73300, avg=46260.87, stdev=9804.76 00:28:43.895 clat percentiles (usec): 00:28:43.895 | 1.00th=[24511], 5.00th=[34341], 10.00th=[36439], 20.00th=[37487], 00:28:43.895 | 30.00th=[38536], 40.00th=[41681], 50.00th=[44827], 60.00th=[46924], 00:28:43.895 | 70.00th=[50594], 80.00th=[55837], 90.00th=[60556], 95.00th=[63701], 00:28:43.895 | 99.00th=[64226], 99.50th=[66323], 99.90th=[69731], 99.95th=[70779], 00:28:43.895 | 99.99th=[70779] 00:28:43.895 write: IOPS=1691, BW=6766KiB/s (6928kB/s)(6840KiB/1011msec); 0 zone resets 00:28:43.895 slat (usec): min=7, max=15678, avg=271.88, stdev=1333.87 00:28:43.895 clat (usec): min=9519, max=78033, avg=33615.97, stdev=11150.62 00:28:43.895 lat (usec): min=11233, max=78068, avg=33887.85, stdev=11202.90 00:28:43.895 clat percentiles (usec): 00:28:43.895 | 1.00th=[13960], 5.00th=[20317], 10.00th=[21103], 20.00th=[24511], 00:28:43.895 | 30.00th=[27919], 40.00th=[30802], 50.00th=[31851], 60.00th=[34866], 00:28:43.896 | 70.00th=[36963], 80.00th=[38011], 90.00th=[46400], 95.00th=[56361], 00:28:43.896 | 99.00th=[72877], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:28:43.896 | 99.99th=[78119] 00:28:43.896 bw ( KiB/s): min= 4472, max= 8175, per=12.10%, avg=6323.50, stdev=2618.42, samples=2 00:28:43.896 iops : min= 1118, max= 2043, avg=1580.50, stdev=654.07, samples=2 00:28:43.896 lat (msec) : 10=0.03%, 20=2.22%, 50=77.82%, 100=19.93% 00:28:43.896 cpu : usr=1.78%, sys=7.03%, ctx=285, majf=0, minf=5 00:28:43.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:28:43.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:43.896 issued rwts: total=1536,1710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:43.896 job2: (groupid=0, jobs=1): err= 0: pid=63976: Wed Apr 17 08:25:17 2024 00:28:43.896 read: IOPS=5868, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1002msec) 00:28:43.896 slat (usec): min=7, max=8567, avg=78.25, stdev=407.57 00:28:43.896 clat (usec): min=1038, max=20055, avg=11114.67, stdev=1814.90 00:28:43.896 lat (usec): min=6164, max=20234, avg=11192.93, stdev=1821.15 00:28:43.896 clat percentiles (usec): 00:28:43.896 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10159], 00:28:43.896 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:28:43.896 | 70.00th=[11469], 80.00th=[12125], 90.00th=[13173], 95.00th=[14091], 00:28:43.896 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19792], 99.95th=[20055], 00:28:43.896 | 99.99th=[20055] 00:28:43.896 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:28:43.896 slat (usec): min=9, max=6477, avg=77.15, stdev=360.17 00:28:43.896 clat (usec): min=3946, max=20037, avg=9986.49, stdev=1705.42 00:28:43.896 lat (usec): 
min=3976, max=20053, avg=10063.65, stdev=1727.49 00:28:43.896 clat percentiles (usec): 00:28:43.896 | 1.00th=[ 5145], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8717], 00:28:43.896 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:28:43.896 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11863], 95.00th=[12518], 00:28:43.896 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15008], 99.95th=[15926], 00:28:43.896 | 99.99th=[20055] 00:28:43.896 bw ( KiB/s): min=24576, max=24625, per=47.07%, avg=24600.50, stdev=34.65, samples=2 00:28:43.896 iops : min= 6144, max= 6156, avg=6150.00, stdev= 8.49, samples=2 00:28:43.896 lat (msec) : 2=0.01%, 4=0.02%, 10=32.35%, 20=67.56%, 50=0.05% 00:28:43.896 cpu : usr=7.39%, sys=26.67%, ctx=335, majf=0, minf=14 00:28:43.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:28:43.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:43.896 issued rwts: total=5880,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:43.896 job3: (groupid=0, jobs=1): err= 0: pid=63977: Wed Apr 17 08:25:17 2024 00:28:43.896 read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec) 00:28:43.896 slat (usec): min=5, max=24321, avg=334.45, stdev=1621.06 00:28:43.896 clat (usec): min=23543, max=74611, avg=45651.32, stdev=10192.46 00:28:43.896 lat (usec): min=23563, max=79499, avg=45985.77, stdev=10258.06 00:28:43.896 clat percentiles (usec): 00:28:43.896 | 1.00th=[30540], 5.00th=[32900], 10.00th=[34341], 20.00th=[36439], 00:28:43.896 | 30.00th=[37487], 40.00th=[39584], 50.00th=[42730], 60.00th=[48497], 00:28:43.896 | 70.00th=[50594], 80.00th=[55313], 90.00th=[62653], 95.00th=[63701], 00:28:43.896 | 99.00th=[66847], 99.50th=[66847], 99.90th=[73925], 99.95th=[74974], 00:28:43.896 | 99.99th=[74974] 00:28:43.896 write: IOPS=1725, BW=6901KiB/s (7066kB/s)(6956KiB/1008msec); 0 zone resets 00:28:43.896 slat (usec): min=8, max=22845, avg=269.90, stdev=1380.88 00:28:43.896 clat (usec): min=6514, max=70328, avg=32231.31, stdev=10400.97 00:28:43.896 lat (usec): min=8402, max=72156, avg=32501.21, stdev=10447.30 00:28:43.896 clat percentiles (usec): 00:28:43.896 | 1.00th=[10290], 5.00th=[19530], 10.00th=[21103], 20.00th=[25297], 00:28:43.896 | 30.00th=[27132], 40.00th=[28181], 50.00th=[30016], 60.00th=[33424], 00:28:43.896 | 70.00th=[36439], 80.00th=[38011], 90.00th=[42730], 95.00th=[54264], 00:28:43.896 | 99.00th=[65799], 99.50th=[66847], 99.90th=[70779], 99.95th=[70779], 00:28:43.896 | 99.99th=[70779] 00:28:43.896 bw ( KiB/s): min= 4704, max= 8192, per=12.34%, avg=6448.00, stdev=2466.39, samples=2 00:28:43.896 iops : min= 1176, max= 2048, avg=1612.00, stdev=616.60, samples=2 00:28:43.896 lat (msec) : 10=0.49%, 20=3.73%, 50=75.91%, 100=19.88% 00:28:43.896 cpu : usr=1.69%, sys=7.65%, ctx=303, majf=0, minf=17 00:28:43.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:28:43.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:43.896 issued rwts: total=1536,1739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:43.896 00:28:43.896 Run status group 0 (all jobs): 00:28:43.896 READ: bw=48.4MiB/s (50.8MB/s), 6077KiB/s-22.9MiB/s (6223kB/s-24.0MB/s), io=49.0MiB (51.3MB), 
run=1002-1011msec 00:28:43.896 WRITE: bw=51.0MiB/s (53.5MB/s), 6766KiB/s-24.0MiB/s (6928kB/s-25.1MB/s), io=51.6MiB (54.1MB), run=1002-1011msec 00:28:43.896 00:28:43.896 Disk stats (read/write): 00:28:43.896 nvme0n1: ios=2868/3072, merge=0/0, ticks=60621/40677, in_queue=101298, util=89.17% 00:28:43.896 nvme0n2: ios=1419/1536, merge=0/0, ticks=29969/21250, in_queue=51219, util=87.98% 00:28:43.896 nvme0n3: ios=5120/5383, merge=0/0, ticks=51625/45229, in_queue=96854, util=89.40% 00:28:43.896 nvme0n4: ios=1361/1536, merge=0/0, ticks=29505/21547, in_queue=51052, util=88.32% 00:28:43.896 08:25:17 -- target/fio.sh@55 -- # sync 00:28:44.155 08:25:17 -- target/fio.sh@59 -- # fio_pid=63990 00:28:44.155 08:25:17 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:28:44.155 08:25:17 -- target/fio.sh@61 -- # sleep 3 00:28:44.155 [global] 00:28:44.155 thread=1 00:28:44.155 invalidate=1 00:28:44.155 rw=read 00:28:44.155 time_based=1 00:28:44.155 runtime=10 00:28:44.155 ioengine=libaio 00:28:44.155 direct=1 00:28:44.155 bs=4096 00:28:44.155 iodepth=1 00:28:44.155 norandommap=1 00:28:44.155 numjobs=1 00:28:44.155 00:28:44.155 [job0] 00:28:44.155 filename=/dev/nvme0n1 00:28:44.155 [job1] 00:28:44.155 filename=/dev/nvme0n2 00:28:44.155 [job2] 00:28:44.155 filename=/dev/nvme0n3 00:28:44.155 [job3] 00:28:44.155 filename=/dev/nvme0n4 00:28:44.155 Could not set queue depth (nvme0n1) 00:28:44.155 Could not set queue depth (nvme0n2) 00:28:44.155 Could not set queue depth (nvme0n3) 00:28:44.155 Could not set queue depth (nvme0n4) 00:28:44.155 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:44.155 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:44.155 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:44.155 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:44.155 fio-3.35 00:28:44.155 Starting 4 threads 00:28:47.447 08:25:20 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:28:47.447 fio: pid=64039, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:28:47.447 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=69902336, buflen=4096 00:28:47.447 08:25:20 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:28:47.447 fio: pid=64038, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:28:47.447 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=75141120, buflen=4096 00:28:47.447 08:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:47.447 08:25:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:28:47.706 fio: pid=64036, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:28:47.706 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=14573568, buflen=4096 00:28:47.706 08:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:47.706 08:25:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:28:47.996 fio: pid=64037, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:28:47.996 fio: io_u error on file /dev/nvme0n2: 
Remote I/O error: read offset=19329024, buflen=4096 00:28:47.996 00:28:47.996 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=64036: Wed Apr 17 08:25:21 2024 00:28:47.996 read: IOPS=6025, BW=23.5MiB/s (24.7MB/s)(77.9MiB/3310msec) 00:28:47.996 slat (usec): min=4, max=21810, avg=10.43, stdev=209.27 00:28:47.996 clat (usec): min=103, max=2305, avg=154.76, stdev=32.02 00:28:47.996 lat (usec): min=111, max=21977, avg=165.19, stdev=212.31 00:28:47.996 clat percentiles (usec): 00:28:47.996 | 1.00th=[ 124], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 141], 00:28:47.996 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:28:47.996 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 180], 95.00th=[ 198], 00:28:47.996 | 99.00th=[ 237], 99.50th=[ 251], 99.90th=[ 318], 99.95th=[ 396], 00:28:47.996 | 99.99th=[ 2114] 00:28:47.996 bw ( KiB/s): min=22096, max=25336, per=28.55%, avg=24702.67, stdev=1279.87, samples=6 00:28:47.996 iops : min= 5524, max= 6334, avg=6175.67, stdev=319.97, samples=6 00:28:47.996 lat (usec) : 250=99.47%, 500=0.48%, 750=0.01%, 1000=0.02% 00:28:47.996 lat (msec) : 2=0.01%, 4=0.01% 00:28:47.996 cpu : usr=0.54%, sys=4.59%, ctx=19952, majf=0, minf=1 00:28:47.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:47.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.996 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.996 issued rwts: total=19943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:47.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:47.996 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=64037: Wed Apr 17 08:25:21 2024 00:28:47.996 read: IOPS=5970, BW=23.3MiB/s (24.5MB/s)(82.4MiB/3535msec) 00:28:47.996 slat (usec): min=5, max=15567, avg=11.34, stdev=164.12 00:28:47.996 clat (usec): min=97, max=2028, avg=155.38, stdev=31.29 00:28:47.996 lat (usec): min=104, max=15793, avg=166.72, stdev=168.02 00:28:47.996 clat percentiles (usec): 00:28:47.996 | 1.00th=[ 112], 5.00th=[ 127], 10.00th=[ 135], 20.00th=[ 141], 00:28:47.996 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:28:47.996 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 198], 00:28:47.996 | 99.00th=[ 235], 99.50th=[ 247], 99.90th=[ 306], 99.95th=[ 351], 00:28:47.996 | 99.99th=[ 1680] 00:28:47.996 bw ( KiB/s): min=21768, max=24736, per=27.91%, avg=24148.00, stdev=1169.64, samples=6 00:28:47.996 iops : min= 5442, max= 6184, avg=6037.00, stdev=292.41, samples=6 00:28:47.996 lat (usec) : 100=0.01%, 250=99.57%, 500=0.37%, 750=0.02%, 1000=0.01% 00:28:47.996 lat (msec) : 2=0.01%, 4=0.01% 00:28:47.996 cpu : usr=0.85%, sys=4.73%, ctx=21118, majf=0, minf=1 00:28:47.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:47.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.996 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.996 issued rwts: total=21104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:47.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:47.996 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=64038: Wed Apr 17 08:25:21 2024 00:28:47.996 read: IOPS=5916, BW=23.1MiB/s (24.2MB/s)(71.7MiB/3101msec) 00:28:47.996 slat (usec): min=6, max=7830, avg= 9.37, stdev=79.20 00:28:47.996 clat (usec): min=114, max=2258, avg=158.82, 
stdev=29.43 00:28:47.996 lat (usec): min=122, max=8006, avg=168.19, stdev=84.79 00:28:47.996 clat percentiles (usec): 00:28:47.996 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:28:47.996 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:28:47.996 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:28:47.996 | 99.00th=[ 202], 99.50th=[ 210], 99.90th=[ 243], 99.95th=[ 334], 00:28:47.996 | 99.99th=[ 1745] 00:28:47.996 bw ( KiB/s): min=23000, max=24000, per=27.34%, avg=23656.00, stdev=408.80, samples=6 00:28:47.996 iops : min= 5750, max= 6000, avg=5914.00, stdev=102.20, samples=6 00:28:47.996 lat (usec) : 250=99.92%, 500=0.04%, 750=0.01% 00:28:47.996 lat (msec) : 2=0.02%, 4=0.01% 00:28:47.996 cpu : usr=0.65%, sys=5.03%, ctx=18354, majf=0, minf=1 00:28:47.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:47.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.996 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.996 issued rwts: total=18346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:47.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:47.996 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=64039: Wed Apr 17 08:25:21 2024 00:28:47.996 read: IOPS=5909, BW=23.1MiB/s (24.2MB/s)(66.7MiB/2888msec) 00:28:47.996 slat (nsec): min=6844, max=82405, avg=8387.76, stdev=3110.32 00:28:47.996 clat (usec): min=120, max=2079, avg=160.06, stdev=29.15 00:28:47.996 lat (usec): min=128, max=2087, avg=168.45, stdev=29.63 00:28:47.996 clat percentiles (usec): 00:28:47.996 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:28:47.996 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:28:47.996 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 188], 00:28:47.996 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 255], 99.95th=[ 644], 00:28:47.996 | 99.99th=[ 1696] 00:28:47.996 bw ( KiB/s): min=22976, max=23944, per=27.37%, avg=23676.80, stdev=406.37, samples=5 00:28:47.996 iops : min= 5744, max= 5986, avg=5919.20, stdev=101.59, samples=5 00:28:47.996 lat (usec) : 250=99.88%, 500=0.05%, 750=0.03%, 1000=0.01% 00:28:47.996 lat (msec) : 2=0.02%, 4=0.01% 00:28:47.996 cpu : usr=0.94%, sys=4.64%, ctx=17067, majf=0, minf=2 00:28:47.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:47.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.996 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.996 issued rwts: total=17067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:47.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:47.996 00:28:47.996 Run status group 0 (all jobs): 00:28:47.996 READ: bw=84.5MiB/s (88.6MB/s), 23.1MiB/s-23.5MiB/s (24.2MB/s-24.7MB/s), io=299MiB (313MB), run=2888-3535msec 00:28:47.996 00:28:47.996 Disk stats (read/write): 00:28:47.996 nvme0n1: ios=18962/0, merge=0/0, ticks=2944/0, in_queue=2944, util=94.92% 00:28:47.996 nvme0n2: ios=19968/0, merge=0/0, ticks=3172/0, in_queue=3172, util=95.42% 00:28:47.996 nvme0n3: ios=17112/0, merge=0/0, ticks=2739/0, in_queue=2739, util=96.68% 00:28:47.996 nvme0n4: ios=17035/0, merge=0/0, ticks=2728/0, in_queue=2728, util=96.81% 00:28:47.996 08:25:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:47.996 08:25:21 -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:28:48.255 08:25:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:48.255 08:25:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:28:48.514 08:25:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:48.514 08:25:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:28:48.514 08:25:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:48.514 08:25:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:28:48.773 08:25:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:28:48.773 08:25:22 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:28:49.032 08:25:22 -- target/fio.sh@69 -- # fio_status=0 00:28:49.032 08:25:22 -- target/fio.sh@70 -- # wait 63990 00:28:49.032 08:25:22 -- target/fio.sh@70 -- # fio_status=4 00:28:49.032 08:25:22 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:49.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:49.032 08:25:22 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:49.032 08:25:22 -- common/autotest_common.sh@1198 -- # local i=0 00:28:49.032 08:25:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:49.032 08:25:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:49.032 08:25:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:49.032 08:25:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:49.032 nvmf hotplug test: fio failed as expected 00:28:49.032 08:25:22 -- common/autotest_common.sh@1210 -- # return 0 00:28:49.032 08:25:22 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:28:49.032 08:25:22 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:28:49.032 08:25:22 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:49.291 08:25:22 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:28:49.291 08:25:22 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:28:49.291 08:25:22 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:28:49.291 08:25:22 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:28:49.291 08:25:22 -- target/fio.sh@91 -- # nvmftestfini 00:28:49.291 08:25:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:49.291 08:25:22 -- nvmf/common.sh@116 -- # sync 00:28:49.291 08:25:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:49.291 08:25:22 -- nvmf/common.sh@119 -- # set +e 00:28:49.291 08:25:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:49.291 08:25:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:49.291 rmmod nvme_tcp 00:28:49.291 rmmod nvme_fabrics 00:28:49.291 rmmod nvme_keyring 00:28:49.291 08:25:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:49.291 08:25:22 -- nvmf/common.sh@123 -- # set -e 00:28:49.291 08:25:22 -- nvmf/common.sh@124 -- # return 0 00:28:49.292 08:25:22 -- nvmf/common.sh@477 -- # '[' -n 63619 ']' 00:28:49.292 08:25:22 -- nvmf/common.sh@478 -- # killprocess 63619 00:28:49.292 08:25:22 -- common/autotest_common.sh@926 -- # '[' -z 63619 
']' 00:28:49.292 08:25:22 -- common/autotest_common.sh@930 -- # kill -0 63619 00:28:49.292 08:25:22 -- common/autotest_common.sh@931 -- # uname 00:28:49.292 08:25:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:49.292 08:25:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63619 00:28:49.551 08:25:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:49.551 08:25:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:49.551 killing process with pid 63619 00:28:49.551 08:25:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63619' 00:28:49.551 08:25:22 -- common/autotest_common.sh@945 -- # kill 63619 00:28:49.551 08:25:22 -- common/autotest_common.sh@950 -- # wait 63619 00:28:49.551 08:25:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:49.551 08:25:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:49.551 08:25:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:49.551 08:25:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:49.551 08:25:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:49.551 08:25:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.551 08:25:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:49.551 08:25:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.810 08:25:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:49.810 00:28:49.810 real 0m18.367s 00:28:49.810 user 1m9.705s 00:28:49.810 sys 0m8.905s 00:28:49.810 08:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.810 08:25:22 -- common/autotest_common.sh@10 -- # set +x 00:28:49.810 ************************************ 00:28:49.810 END TEST nvmf_fio_target 00:28:49.810 ************************************ 00:28:49.810 08:25:22 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:28:49.810 08:25:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:49.810 08:25:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:49.810 08:25:22 -- common/autotest_common.sh@10 -- # set +x 00:28:49.810 ************************************ 00:28:49.810 START TEST nvmf_bdevio 00:28:49.810 ************************************ 00:28:49.810 08:25:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:28:49.810 * Looking for test storage... 
00:28:49.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:49.810 08:25:23 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:49.810 08:25:23 -- nvmf/common.sh@7 -- # uname -s 00:28:49.810 08:25:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.810 08:25:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.810 08:25:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.810 08:25:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.810 08:25:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.810 08:25:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.810 08:25:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.810 08:25:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.810 08:25:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.810 08:25:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.810 08:25:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:49.810 08:25:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:49.810 08:25:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.810 08:25:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.810 08:25:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:49.810 08:25:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:49.810 08:25:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.810 08:25:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.810 08:25:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.810 08:25:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.810 08:25:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.810 08:25:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.810 08:25:23 -- 
paths/export.sh@5 -- # export PATH 00:28:49.810 08:25:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.810 08:25:23 -- nvmf/common.sh@46 -- # : 0 00:28:49.810 08:25:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:49.810 08:25:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:49.810 08:25:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:49.810 08:25:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.810 08:25:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.810 08:25:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:49.810 08:25:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:49.810 08:25:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:50.069 08:25:23 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:50.069 08:25:23 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:50.069 08:25:23 -- target/bdevio.sh@14 -- # nvmftestinit 00:28:50.069 08:25:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:50.069 08:25:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.069 08:25:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:50.069 08:25:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:50.069 08:25:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:50.069 08:25:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.069 08:25:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.069 08:25:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.069 08:25:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:50.069 08:25:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:50.069 08:25:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:50.069 08:25:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:50.069 08:25:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:50.069 08:25:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:50.069 08:25:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.069 08:25:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.069 08:25:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:50.069 08:25:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:50.069 08:25:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:50.069 08:25:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:50.069 08:25:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:50.069 08:25:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.069 08:25:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:50.069 08:25:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:50.069 08:25:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:50.069 08:25:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:50.070 08:25:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:50.070 
08:25:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:50.070 Cannot find device "nvmf_tgt_br" 00:28:50.070 08:25:23 -- nvmf/common.sh@154 -- # true 00:28:50.070 08:25:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:50.070 Cannot find device "nvmf_tgt_br2" 00:28:50.070 08:25:23 -- nvmf/common.sh@155 -- # true 00:28:50.070 08:25:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:50.070 08:25:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:50.070 Cannot find device "nvmf_tgt_br" 00:28:50.070 08:25:23 -- nvmf/common.sh@157 -- # true 00:28:50.070 08:25:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:50.070 Cannot find device "nvmf_tgt_br2" 00:28:50.070 08:25:23 -- nvmf/common.sh@158 -- # true 00:28:50.070 08:25:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:50.070 08:25:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:50.070 08:25:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:50.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:50.070 08:25:23 -- nvmf/common.sh@161 -- # true 00:28:50.070 08:25:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:50.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:50.070 08:25:23 -- nvmf/common.sh@162 -- # true 00:28:50.070 08:25:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:50.070 08:25:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:50.070 08:25:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:50.070 08:25:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:50.070 08:25:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:50.070 08:25:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:50.070 08:25:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:50.070 08:25:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:50.070 08:25:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:50.070 08:25:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:50.070 08:25:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:50.328 08:25:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:50.328 08:25:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:50.328 08:25:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:50.328 08:25:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:50.328 08:25:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:50.328 08:25:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:50.328 08:25:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:50.328 08:25:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:50.328 08:25:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:50.328 08:25:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:50.328 08:25:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:50.328 08:25:23 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:50.328 08:25:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:50.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:28:50.328 00:28:50.328 --- 10.0.0.2 ping statistics --- 00:28:50.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.328 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:28:50.328 08:25:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:50.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:50.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:28:50.328 00:28:50.328 --- 10.0.0.3 ping statistics --- 00:28:50.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.328 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:28:50.328 08:25:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:50.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:28:50.328 00:28:50.328 --- 10.0.0.1 ping statistics --- 00:28:50.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.328 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:28:50.328 08:25:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.328 08:25:23 -- nvmf/common.sh@421 -- # return 0 00:28:50.328 08:25:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:50.328 08:25:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.328 08:25:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:50.328 08:25:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:50.328 08:25:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.328 08:25:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:50.328 08:25:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:50.328 08:25:23 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:28:50.328 08:25:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:50.328 08:25:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:50.328 08:25:23 -- common/autotest_common.sh@10 -- # set +x 00:28:50.328 08:25:23 -- nvmf/common.sh@469 -- # nvmfpid=64297 00:28:50.328 08:25:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:28:50.328 08:25:23 -- nvmf/common.sh@470 -- # waitforlisten 64297 00:28:50.328 08:25:23 -- common/autotest_common.sh@819 -- # '[' -z 64297 ']' 00:28:50.328 08:25:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.328 08:25:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:50.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.328 08:25:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.328 08:25:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:50.328 08:25:23 -- common/autotest_common.sh@10 -- # set +x 00:28:50.328 [2024-04-17 08:25:23.589709] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
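[Annotation] The trace above is nvmf_veth_init building the test topology: a network namespace for the target, three veth pairs, a bridge joining the host-side ends, iptables exceptions for NVMe/TCP traffic, and ping checks in both directions. Condensed into a stand-alone sketch (interface names and addresses exactly as in the log; link-up steps compressed, error handling omitted):

  # target runs in its own namespace; the initiator stays on the host
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator keeps 10.0.0.1; the namespace gets the two target addresses
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers together and allow NVMe/TCP on port 4420
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity checks before the target is started
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the script first tears down any leftover topology from a previous run before recreating it.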
00:28:50.328 [2024-04-17 08:25:23.589782] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.587 [2024-04-17 08:25:23.729775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.587 [2024-04-17 08:25:23.831042] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:50.587 [2024-04-17 08:25:23.831195] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.587 [2024-04-17 08:25:23.831203] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.587 [2024-04-17 08:25:23.831209] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.587 [2024-04-17 08:25:23.831447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:50.587 [2024-04-17 08:25:23.831617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.587 [2024-04-17 08:25:23.831503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:50.587 [2024-04-17 08:25:23.831624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:51.153 08:25:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:51.153 08:25:24 -- common/autotest_common.sh@852 -- # return 0 00:28:51.153 08:25:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:51.153 08:25:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:51.153 08:25:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.153 08:25:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.153 08:25:24 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.153 08:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.153 08:25:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.411 [2024-04-17 08:25:24.491706] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.411 08:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.412 08:25:24 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:51.412 08:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.412 08:25:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.412 Malloc0 00:28:51.412 08:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.412 08:25:24 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.412 08:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.412 08:25:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.412 08:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.412 08:25:24 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.412 08:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.412 08:25:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.412 08:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.412 08:25:24 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.412 08:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.412 08:25:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.412 
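[Annotation] With the target up, rpc_cmd (a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket) provisions the subsystem that bdevio will exercise. Issued by hand, the sequence from the trace looks roughly like this ($RPC is just a shorthand introduced here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, flags as in the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The "TCP Transport Init" and "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notices that follow in the trace confirm the transport and listener were created.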
[2024-04-17 08:25:24.563153] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.412 08:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.412 08:25:24 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:28:51.412 08:25:24 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:28:51.412 08:25:24 -- nvmf/common.sh@520 -- # config=() 00:28:51.412 08:25:24 -- nvmf/common.sh@520 -- # local subsystem config 00:28:51.412 08:25:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:51.412 08:25:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:51.412 { 00:28:51.412 "params": { 00:28:51.412 "name": "Nvme$subsystem", 00:28:51.412 "trtype": "$TEST_TRANSPORT", 00:28:51.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.412 "adrfam": "ipv4", 00:28:51.412 "trsvcid": "$NVMF_PORT", 00:28:51.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.412 "hdgst": ${hdgst:-false}, 00:28:51.412 "ddgst": ${ddgst:-false} 00:28:51.412 }, 00:28:51.412 "method": "bdev_nvme_attach_controller" 00:28:51.412 } 00:28:51.412 EOF 00:28:51.412 )") 00:28:51.412 08:25:24 -- nvmf/common.sh@542 -- # cat 00:28:51.412 08:25:24 -- nvmf/common.sh@544 -- # jq . 00:28:51.412 08:25:24 -- nvmf/common.sh@545 -- # IFS=, 00:28:51.412 08:25:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:51.412 "params": { 00:28:51.412 "name": "Nvme1", 00:28:51.412 "trtype": "tcp", 00:28:51.412 "traddr": "10.0.0.2", 00:28:51.412 "adrfam": "ipv4", 00:28:51.412 "trsvcid": "4420", 00:28:51.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.412 "hdgst": false, 00:28:51.412 "ddgst": false 00:28:51.412 }, 00:28:51.412 "method": "bdev_nvme_attach_controller" 00:28:51.412 }' 00:28:51.412 [2024-04-17 08:25:24.617705] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:51.412 [2024-04-17 08:25:24.617794] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64333 ] 00:28:51.671 [2024-04-17 08:25:24.761213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:51.671 [2024-04-17 08:25:24.866094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.671 [2024-04-17 08:25:24.866726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.671 [2024-04-17 08:25:24.866728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.930 [2024-04-17 08:25:25.019764] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
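[Annotation] On the initiator side, gen_nvmf_target_json expands the heredoc above once per subsystem and hands the result to the bdevio binary through /dev/fd/62, the file descriptor bash allocates for the process substitution carrying the JSON. For this single-subsystem run the generated bdev_nvme_attach_controller entry, pretty-printed, is:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

bdevio loads this configuration, attaches to the target over TCP, and runs its CUnit suite against the resulting Nvme1n1 block device, which is what the "Suite: bdevio tests on: Nvme1n1" section below reports.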
00:28:51.930 [2024-04-17 08:25:25.019800] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:51.930 I/O targets: 00:28:51.930 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:28:51.930 00:28:51.930 00:28:51.930 CUnit - A unit testing framework for C - Version 2.1-3 00:28:51.930 http://cunit.sourceforge.net/ 00:28:51.930 00:28:51.930 00:28:51.930 Suite: bdevio tests on: Nvme1n1 00:28:51.930 Test: blockdev write read block ...passed 00:28:51.930 Test: blockdev write zeroes read block ...passed 00:28:51.930 Test: blockdev write zeroes read no split ...passed 00:28:51.930 Test: blockdev write zeroes read split ...passed 00:28:51.930 Test: blockdev write zeroes read split partial ...passed 00:28:51.931 Test: blockdev reset ...[2024-04-17 08:25:25.051541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.931 [2024-04-17 08:25:25.051631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5bb0 (9): Bad file descriptor 00:28:51.931 [2024-04-17 08:25:25.070459] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:51.931 passed 00:28:51.931 Test: blockdev write read 8 blocks ...passed 00:28:51.931 Test: blockdev write read size > 128k ...passed 00:28:51.931 Test: blockdev write read invalid size ...passed 00:28:51.931 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:51.931 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:51.931 Test: blockdev write read max offset ...passed 00:28:51.931 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:51.931 Test: blockdev writev readv 8 blocks ...passed 00:28:51.931 Test: blockdev writev readv 30 x 1block ...passed 00:28:51.931 Test: blockdev writev readv block ...passed 00:28:51.931 Test: blockdev writev readv size > 128k ...passed 00:28:51.931 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:51.931 Test: blockdev comparev and writev ...[2024-04-17 08:25:25.077073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:51.931 [2024-04-17 08:25:25.077212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.077287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:51.931 [2024-04-17 08:25:25.077363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.077717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:51.931 [2024-04-17 08:25:25.077789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.077868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:51.931 [2024-04-17 08:25:25.077929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.078239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:51.931 [2024-04-17 08:25:25.078329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.078395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:51.931 [2024-04-17 08:25:25.078450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.078767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:51.931 [2024-04-17 08:25:25.078830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.078909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:51.931 [2024-04-17 08:25:25.078952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:51.931 passed 00:28:51.931 Test: blockdev nvme passthru rw ...passed 00:28:51.931 Test: blockdev nvme passthru vendor specific ...[2024-04-17 08:25:25.079889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:51.931 [2024-04-17 08:25:25.079988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.080155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:51.931 [2024-04-17 08:25:25.080209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.080377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:51.931 [2024-04-17 08:25:25.080428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:51.931 [2024-04-17 08:25:25.080591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:51.931 [2024-04-17 08:25:25.080649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:51.931 passed 00:28:51.931 Test: blockdev nvme admin passthru ...passed 00:28:51.931 Test: blockdev copy ...passed 00:28:51.931 00:28:51.931 Run Summary: Type Total Ran Passed Failed Inactive 00:28:51.931 suites 1 1 n/a 0 0 00:28:51.931 tests 23 23 23 0 0 00:28:51.931 asserts 152 152 152 0 n/a 00:28:51.931 00:28:51.931 Elapsed time = 0.146 seconds 00:28:52.191 08:25:25 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:52.191 08:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.191 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:52.191 08:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.191 08:25:25 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:28:52.191 08:25:25 -- target/bdevio.sh@30 -- # nvmftestfini 00:28:52.191 08:25:25 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:28:52.191 08:25:25 -- nvmf/common.sh@116 -- # sync 00:28:52.191 08:25:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:52.191 08:25:25 -- nvmf/common.sh@119 -- # set +e 00:28:52.191 08:25:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:52.191 08:25:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:52.191 rmmod nvme_tcp 00:28:52.191 rmmod nvme_fabrics 00:28:52.191 rmmod nvme_keyring 00:28:52.191 08:25:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:52.191 08:25:25 -- nvmf/common.sh@123 -- # set -e 00:28:52.191 08:25:25 -- nvmf/common.sh@124 -- # return 0 00:28:52.191 08:25:25 -- nvmf/common.sh@477 -- # '[' -n 64297 ']' 00:28:52.191 08:25:25 -- nvmf/common.sh@478 -- # killprocess 64297 00:28:52.191 08:25:25 -- common/autotest_common.sh@926 -- # '[' -z 64297 ']' 00:28:52.191 08:25:25 -- common/autotest_common.sh@930 -- # kill -0 64297 00:28:52.191 08:25:25 -- common/autotest_common.sh@931 -- # uname 00:28:52.191 08:25:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:52.191 08:25:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64297 00:28:52.191 08:25:25 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:28:52.191 08:25:25 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:28:52.191 08:25:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64297' 00:28:52.191 killing process with pid 64297 00:28:52.191 08:25:25 -- common/autotest_common.sh@945 -- # kill 64297 00:28:52.191 08:25:25 -- common/autotest_common.sh@950 -- # wait 64297 00:28:52.450 08:25:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:52.450 08:25:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:52.450 08:25:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:52.450 08:25:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:52.450 08:25:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:52.450 08:25:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.450 08:25:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:52.450 08:25:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.450 08:25:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:52.450 00:28:52.450 real 0m2.756s 00:28:52.450 user 0m8.650s 00:28:52.450 sys 0m0.758s 00:28:52.450 ************************************ 00:28:52.450 END TEST nvmf_bdevio 00:28:52.450 ************************************ 00:28:52.450 08:25:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.450 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:52.712 08:25:25 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:28:52.712 08:25:25 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:28:52.712 08:25:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:28:52.712 08:25:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:52.712 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:52.712 ************************************ 00:28:52.712 START TEST nvmf_bdevio_no_huge 00:28:52.712 ************************************ 00:28:52.712 08:25:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:28:52.712 * Looking for test storage... 
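[Annotation] The tail of the first run above is the shared teardown: nvmftestfini unloads the kernel initiator modules, kills the target, and removes the namespace before the next variant starts. Stripped of its retry loop and helpers (killprocess and _remove_spdk_ns are test functions, shown here as plain-command equivalents), it amounts to:

  sync
  modprobe -v -r nvme-tcp                  # also drops nvme_fabrics / nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                          # killprocess 64297, the nvmf_tgt started for this run
  ip netns delete nvmf_tgt_ns_spdk         # _remove_spdk_ns
  ip -4 addr flush nvmf_init_if

The same prologue and epilogue then repeat for nvmf_bdevio_no_huge, which reruns bdevio.sh with --no-hugepages.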
00:28:52.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:52.712 08:25:25 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:52.712 08:25:25 -- nvmf/common.sh@7 -- # uname -s 00:28:52.712 08:25:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.712 08:25:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.712 08:25:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.712 08:25:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.712 08:25:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.712 08:25:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.712 08:25:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.712 08:25:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.712 08:25:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.712 08:25:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.712 08:25:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:52.712 08:25:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:52.712 08:25:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.712 08:25:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.712 08:25:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:52.712 08:25:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:52.712 08:25:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.712 08:25:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.712 08:25:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.712 08:25:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.712 08:25:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.712 08:25:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.712 08:25:25 -- 
paths/export.sh@5 -- # export PATH 00:28:52.712 08:25:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.712 08:25:25 -- nvmf/common.sh@46 -- # : 0 00:28:52.712 08:25:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:52.712 08:25:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:52.712 08:25:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:52.712 08:25:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.712 08:25:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.712 08:25:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:52.712 08:25:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:52.712 08:25:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:52.712 08:25:25 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:52.712 08:25:25 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:52.712 08:25:25 -- target/bdevio.sh@14 -- # nvmftestinit 00:28:52.712 08:25:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:52.712 08:25:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.712 08:25:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:52.712 08:25:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:52.712 08:25:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:52.712 08:25:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.712 08:25:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:52.712 08:25:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.712 08:25:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:52.712 08:25:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:52.712 08:25:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:52.712 08:25:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:52.712 08:25:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:52.712 08:25:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:52.713 08:25:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.713 08:25:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.713 08:25:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:52.713 08:25:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:52.713 08:25:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:52.713 08:25:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:52.713 08:25:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:52.713 08:25:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.713 08:25:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:52.713 08:25:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:52.713 08:25:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:52.713 08:25:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:52.713 08:25:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:52.713 
08:25:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:52.713 Cannot find device "nvmf_tgt_br" 00:28:52.713 08:25:26 -- nvmf/common.sh@154 -- # true 00:28:52.713 08:25:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:52.713 Cannot find device "nvmf_tgt_br2" 00:28:52.713 08:25:26 -- nvmf/common.sh@155 -- # true 00:28:52.713 08:25:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:52.713 08:25:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:52.713 Cannot find device "nvmf_tgt_br" 00:28:52.713 08:25:26 -- nvmf/common.sh@157 -- # true 00:28:52.713 08:25:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:52.972 Cannot find device "nvmf_tgt_br2" 00:28:52.972 08:25:26 -- nvmf/common.sh@158 -- # true 00:28:52.972 08:25:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:52.972 08:25:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:52.972 08:25:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:52.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:52.972 08:25:26 -- nvmf/common.sh@161 -- # true 00:28:52.972 08:25:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:52.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:52.972 08:25:26 -- nvmf/common.sh@162 -- # true 00:28:52.972 08:25:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:52.972 08:25:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:52.972 08:25:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:52.972 08:25:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:52.972 08:25:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:52.972 08:25:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:52.972 08:25:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:52.972 08:25:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:52.972 08:25:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:52.972 08:25:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:52.972 08:25:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:52.972 08:25:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:52.972 08:25:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:52.972 08:25:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:52.972 08:25:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:52.972 08:25:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:52.972 08:25:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:52.972 08:25:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:52.972 08:25:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:52.972 08:25:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:52.972 08:25:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:52.972 08:25:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:52.972 08:25:26 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:52.972 08:25:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:52.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:28:52.973 00:28:52.973 --- 10.0.0.2 ping statistics --- 00:28:52.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.973 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:52.973 08:25:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:52.973 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:52.973 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:28:52.973 00:28:52.973 --- 10.0.0.3 ping statistics --- 00:28:52.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.973 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:28:52.973 08:25:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:52.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:28:52.973 00:28:52.973 --- 10.0.0.1 ping statistics --- 00:28:52.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.973 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:28:52.973 08:25:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.973 08:25:26 -- nvmf/common.sh@421 -- # return 0 00:28:52.973 08:25:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:52.973 08:25:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.973 08:25:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:52.973 08:25:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:52.973 08:25:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.973 08:25:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:52.973 08:25:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:53.231 08:25:26 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:28:53.231 08:25:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:53.231 08:25:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:53.231 08:25:26 -- common/autotest_common.sh@10 -- # set +x 00:28:53.231 08:25:26 -- nvmf/common.sh@469 -- # nvmfpid=64506 00:28:53.231 08:25:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:28:53.231 08:25:26 -- nvmf/common.sh@470 -- # waitforlisten 64506 00:28:53.232 08:25:26 -- common/autotest_common.sh@819 -- # '[' -z 64506 ']' 00:28:53.232 08:25:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.232 08:25:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:53.232 08:25:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.232 08:25:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:53.232 08:25:26 -- common/autotest_common.sh@10 -- # set +x 00:28:53.232 [2024-04-17 08:25:26.398252] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
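[Annotation] nvmfappstart launches the target (this time with --no-huge -s 1024) and then blocks in waitforlisten until the RPC server answers on /var/tmp/spdk.sock; only then are the provisioning RPCs below issued. In spirit, though not literally the helper's code, the wait is a simple poll loop:

  # poll until nvmf_tgt is ready to accept RPCs (illustrative sketch, not the real helper)
  for _ in $(seq 1 100); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done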
00:28:53.232 [2024-04-17 08:25:26.398375] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:28:53.232 [2024-04-17 08:25:26.556810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.490 [2024-04-17 08:25:26.660886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:53.490 [2024-04-17 08:25:26.661011] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.490 [2024-04-17 08:25:26.661018] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.490 [2024-04-17 08:25:26.661024] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.491 [2024-04-17 08:25:26.661135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:53.491 [2024-04-17 08:25:26.661378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:53.491 [2024-04-17 08:25:26.661590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.491 [2024-04-17 08:25:26.661593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:54.059 08:25:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:54.059 08:25:27 -- common/autotest_common.sh@852 -- # return 0 00:28:54.059 08:25:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:54.059 08:25:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:54.059 08:25:27 -- common/autotest_common.sh@10 -- # set +x 00:28:54.059 08:25:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.059 08:25:27 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.059 08:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.059 08:25:27 -- common/autotest_common.sh@10 -- # set +x 00:28:54.060 [2024-04-17 08:25:27.299339] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.060 08:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.060 08:25:27 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:54.060 08:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.060 08:25:27 -- common/autotest_common.sh@10 -- # set +x 00:28:54.060 Malloc0 00:28:54.060 08:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.060 08:25:27 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:54.060 08:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.060 08:25:27 -- common/autotest_common.sh@10 -- # set +x 00:28:54.060 08:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.060 08:25:27 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:54.060 08:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.060 08:25:27 -- common/autotest_common.sh@10 -- # set +x 00:28:54.060 08:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.060 08:25:27 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.060 08:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.060 08:25:27 -- common/autotest_common.sh@10 -- # set +x 00:28:54.060 
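[Annotation] Functionally this pass provisions the identical subsystem; what changes is the memory setup. Both the target and the bdevio initiator run without hugepages and with a fixed memory budget, which is why the EAL parameters above show --no-huge --iova-mode=va where the first run used hugepage-backed --iova-mode=pa. Side by side ($SPDK below is just a shorthand for /home/vagrant/spdk_repo/spdk):

  # first pass (nvmf_bdevio): hugepage-backed DPDK memory
  ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
  $SPDK/test/bdev/bdevio/bdevio --json /dev/fd/62

  # this pass (nvmf_bdevio_no_huge): plain anonymous memory, capped by -s 1024 (MB)
  ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  $SPDK/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024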
[2024-04-17 08:25:27.349025] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.060 08:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.060 08:25:27 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:28:54.060 08:25:27 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:28:54.060 08:25:27 -- nvmf/common.sh@520 -- # config=() 00:28:54.060 08:25:27 -- nvmf/common.sh@520 -- # local subsystem config 00:28:54.060 08:25:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:54.060 08:25:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:54.060 { 00:28:54.060 "params": { 00:28:54.060 "name": "Nvme$subsystem", 00:28:54.060 "trtype": "$TEST_TRANSPORT", 00:28:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.060 "adrfam": "ipv4", 00:28:54.060 "trsvcid": "$NVMF_PORT", 00:28:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.060 "hdgst": ${hdgst:-false}, 00:28:54.060 "ddgst": ${ddgst:-false} 00:28:54.060 }, 00:28:54.060 "method": "bdev_nvme_attach_controller" 00:28:54.060 } 00:28:54.060 EOF 00:28:54.060 )") 00:28:54.060 08:25:27 -- nvmf/common.sh@542 -- # cat 00:28:54.060 08:25:27 -- nvmf/common.sh@544 -- # jq . 00:28:54.060 08:25:27 -- nvmf/common.sh@545 -- # IFS=, 00:28:54.060 08:25:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:54.060 "params": { 00:28:54.060 "name": "Nvme1", 00:28:54.060 "trtype": "tcp", 00:28:54.060 "traddr": "10.0.0.2", 00:28:54.060 "adrfam": "ipv4", 00:28:54.060 "trsvcid": "4420", 00:28:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.060 "hdgst": false, 00:28:54.060 "ddgst": false 00:28:54.060 }, 00:28:54.060 "method": "bdev_nvme_attach_controller" 00:28:54.060 }' 00:28:54.320 [2024-04-17 08:25:27.403299] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:54.320 [2024-04-17 08:25:27.403362] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64542 ] 00:28:54.320 [2024-04-17 08:25:27.532853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:54.580 [2024-04-17 08:25:27.676995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.580 [2024-04-17 08:25:27.677184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.580 [2024-04-17 08:25:27.677185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.580 [2024-04-17 08:25:27.838867] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:54.580 [2024-04-17 08:25:27.838921] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:54.580 I/O targets: 00:28:54.580 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:28:54.580 00:28:54.580 00:28:54.580 CUnit - A unit testing framework for C - Version 2.1-3 00:28:54.580 http://cunit.sourceforge.net/ 00:28:54.580 00:28:54.580 00:28:54.580 Suite: bdevio tests on: Nvme1n1 00:28:54.580 Test: blockdev write read block ...passed 00:28:54.580 Test: blockdev write zeroes read block ...passed 00:28:54.580 Test: blockdev write zeroes read no split ...passed 00:28:54.580 Test: blockdev write zeroes read split ...passed 00:28:54.580 Test: blockdev write zeroes read split partial ...passed 00:28:54.580 Test: blockdev reset ...[2024-04-17 08:25:27.878567] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.580 [2024-04-17 08:25:27.878677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c0eb0 (9): Bad file descriptor 00:28:54.580 [2024-04-17 08:25:27.898889] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:54.580 passed 00:28:54.580 Test: blockdev write read 8 blocks ...passed 00:28:54.580 Test: blockdev write read size > 128k ...passed 00:28:54.580 Test: blockdev write read invalid size ...passed 00:28:54.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:54.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:54.580 Test: blockdev write read max offset ...passed 00:28:54.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:54.580 Test: blockdev writev readv 8 blocks ...passed 00:28:54.580 Test: blockdev writev readv 30 x 1block ...passed 00:28:54.580 Test: blockdev writev readv block ...passed 00:28:54.580 Test: blockdev writev readv size > 128k ...passed 00:28:54.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:54.580 Test: blockdev comparev and writev ...[2024-04-17 08:25:27.905748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:54.580 [2024-04-17 08:25:27.905790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.905806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:54.580 [2024-04-17 08:25:27.905814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.906073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:54.580 [2024-04-17 08:25:27.906091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.906104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:54.580 [2024-04-17 08:25:27.906111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.906371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:54.580 [2024-04-17 08:25:27.906389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.906401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:54.580 [2024-04-17 08:25:27.906408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.906651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:54.580 [2024-04-17 08:25:27.906668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.906680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:54.580 [2024-04-17 08:25:27.906687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:54.580 passed 00:28:54.580 Test: blockdev nvme passthru rw ...passed 00:28:54.580 Test: blockdev nvme passthru vendor specific ...[2024-04-17 08:25:27.907327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:54.580 [2024-04-17 08:25:27.907352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.907449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:54.580 [2024-04-17 08:25:27.907466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.907547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:54.580 [2024-04-17 08:25:27.907567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:54.580 [2024-04-17 08:25:27.907657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:54.580 [2024-04-17 08:25:27.907682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:54.580 passed 00:28:54.840 Test: blockdev nvme admin passthru ...passed 00:28:54.840 Test: blockdev copy ...passed 00:28:54.840 00:28:54.840 Run Summary: Type Total Ran Passed Failed Inactive 00:28:54.840 suites 1 1 n/a 0 0 00:28:54.840 tests 23 23 23 0 0 00:28:54.840 asserts 152 152 152 0 n/a 00:28:54.840 00:28:54.840 Elapsed time = 0.165 seconds 00:28:55.100 08:25:28 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.100 08:25:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.100 08:25:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.100 08:25:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.100 08:25:28 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:28:55.100 08:25:28 -- target/bdevio.sh@30 -- # nvmftestfini 00:28:55.100 08:25:28 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:28:55.100 08:25:28 -- nvmf/common.sh@116 -- # sync 00:28:55.100 08:25:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:55.100 08:25:28 -- nvmf/common.sh@119 -- # set +e 00:28:55.100 08:25:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:55.100 08:25:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:55.100 rmmod nvme_tcp 00:28:55.100 rmmod nvme_fabrics 00:28:55.100 rmmod nvme_keyring 00:28:55.100 08:25:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:55.100 08:25:28 -- nvmf/common.sh@123 -- # set -e 00:28:55.100 08:25:28 -- nvmf/common.sh@124 -- # return 0 00:28:55.100 08:25:28 -- nvmf/common.sh@477 -- # '[' -n 64506 ']' 00:28:55.100 08:25:28 -- nvmf/common.sh@478 -- # killprocess 64506 00:28:55.100 08:25:28 -- common/autotest_common.sh@926 -- # '[' -z 64506 ']' 00:28:55.100 08:25:28 -- common/autotest_common.sh@930 -- # kill -0 64506 00:28:55.100 08:25:28 -- common/autotest_common.sh@931 -- # uname 00:28:55.100 08:25:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:55.100 08:25:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64506 00:28:55.100 08:25:28 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:28:55.100 killing process with pid 64506 00:28:55.100 08:25:28 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:28:55.100 08:25:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64506' 00:28:55.100 08:25:28 -- common/autotest_common.sh@945 -- # kill 64506 00:28:55.100 08:25:28 -- common/autotest_common.sh@950 -- # wait 64506 00:28:55.670 08:25:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:55.670 08:25:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:55.670 08:25:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:55.670 08:25:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.670 08:25:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:55.670 08:25:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.670 08:25:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.670 08:25:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.670 08:25:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:55.670 00:28:55.670 real 0m3.028s 00:28:55.670 user 0m9.553s 00:28:55.670 sys 0m1.208s 00:28:55.670 08:25:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:55.670 08:25:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.670 ************************************ 00:28:55.670 END TEST nvmf_bdevio_no_huge 00:28:55.670 ************************************ 00:28:55.670 08:25:28 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:28:55.670 08:25:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:55.670 08:25:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:55.670 08:25:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.670 ************************************ 00:28:55.670 START TEST nvmf_tls 00:28:55.670 ************************************ 00:28:55.670 08:25:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:28:55.929 * Looking for test storage... 
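[Annotation] Each of these blocks is driven by the same run_test wrapper traced from common/autotest_common.sh: it prints the START TEST banner, times the child script (the real/user/sys lines above), and prints the END TEST banner when the script succeeds. A simplified illustration of that pattern, not the actual helper:

  run_test() {                             # illustrative sketch only
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                            # e.g. .../test/nvmf/target/tls.sh --transport=tcp
      echo "END TEST $name"
  }

The nvmf_tls block that begins here reuses the same nvmftestinit / nvmf_veth_init prologue seen twice above before moving on to the TLS-specific cases.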
00:28:55.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:55.929 08:25:29 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:55.929 08:25:29 -- nvmf/common.sh@7 -- # uname -s 00:28:55.929 08:25:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.929 08:25:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.929 08:25:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.929 08:25:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.929 08:25:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.929 08:25:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.929 08:25:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.929 08:25:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.929 08:25:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.929 08:25:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.929 08:25:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:55.929 08:25:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:28:55.929 08:25:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.929 08:25:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.929 08:25:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:55.929 08:25:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:55.929 08:25:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.929 08:25:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.929 08:25:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.929 08:25:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.929 08:25:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.929 08:25:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.929 08:25:29 -- paths/export.sh@5 
-- # export PATH 00:28:55.929 08:25:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.929 08:25:29 -- nvmf/common.sh@46 -- # : 0 00:28:55.929 08:25:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:55.929 08:25:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:55.929 08:25:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:55.929 08:25:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.929 08:25:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.929 08:25:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:55.929 08:25:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:55.929 08:25:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:55.929 08:25:29 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:55.929 08:25:29 -- target/tls.sh@71 -- # nvmftestinit 00:28:55.929 08:25:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:55.929 08:25:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.929 08:25:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:55.929 08:25:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:55.929 08:25:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:55.929 08:25:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.929 08:25:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.929 08:25:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.929 08:25:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:55.929 08:25:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:55.929 08:25:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:55.929 08:25:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:55.929 08:25:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:28:55.929 08:25:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:55.929 08:25:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.929 08:25:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.929 08:25:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:55.929 08:25:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:55.929 08:25:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:55.929 08:25:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:55.929 08:25:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:55.929 08:25:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.929 08:25:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:55.929 08:25:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:55.929 08:25:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:55.929 08:25:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:55.930 08:25:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:55.930 08:25:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:28:55.930 Cannot find device "nvmf_tgt_br" 00:28:55.930 08:25:29 -- nvmf/common.sh@154 -- # true 00:28:55.930 08:25:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:55.930 Cannot find device "nvmf_tgt_br2" 00:28:55.930 08:25:29 -- nvmf/common.sh@155 -- # true 00:28:55.930 08:25:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:55.930 08:25:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:55.930 Cannot find device "nvmf_tgt_br" 00:28:55.930 08:25:29 -- nvmf/common.sh@157 -- # true 00:28:55.930 08:25:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:55.930 Cannot find device "nvmf_tgt_br2" 00:28:55.930 08:25:29 -- nvmf/common.sh@158 -- # true 00:28:55.930 08:25:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:55.930 08:25:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:55.930 08:25:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:55.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.930 08:25:29 -- nvmf/common.sh@161 -- # true 00:28:55.930 08:25:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:56.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:56.188 08:25:29 -- nvmf/common.sh@162 -- # true 00:28:56.188 08:25:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:56.188 08:25:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:56.188 08:25:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:56.188 08:25:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:56.188 08:25:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:56.188 08:25:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:56.188 08:25:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:56.188 08:25:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:56.188 08:25:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:56.188 08:25:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:56.188 08:25:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:56.188 08:25:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:56.188 08:25:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:56.188 08:25:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:56.188 08:25:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:56.188 08:25:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:56.188 08:25:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:56.188 08:25:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:56.188 08:25:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:56.188 08:25:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:56.188 08:25:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:56.188 08:25:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:56.188 08:25:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:28:56.188 08:25:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:56.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:28:56.188 00:28:56.188 --- 10.0.0.2 ping statistics --- 00:28:56.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.188 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:56.188 08:25:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:56.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:56.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:28:56.188 00:28:56.188 --- 10.0.0.3 ping statistics --- 00:28:56.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.188 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:28:56.188 08:25:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:56.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:56.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:28:56.188 00:28:56.188 --- 10.0.0.1 ping statistics --- 00:28:56.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.188 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:28:56.188 08:25:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.188 08:25:29 -- nvmf/common.sh@421 -- # return 0 00:28:56.188 08:25:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:56.188 08:25:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.188 08:25:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:56.188 08:25:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:56.188 08:25:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.189 08:25:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:56.189 08:25:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:56.189 08:25:29 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:28:56.189 08:25:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:56.189 08:25:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:56.189 08:25:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.189 08:25:29 -- nvmf/common.sh@469 -- # nvmfpid=64720 00:28:56.189 08:25:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:28:56.189 08:25:29 -- nvmf/common.sh@470 -- # waitforlisten 64720 00:28:56.189 08:25:29 -- common/autotest_common.sh@819 -- # '[' -z 64720 ']' 00:28:56.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.189 08:25:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.189 08:25:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:56.189 08:25:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.189 08:25:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:56.189 08:25:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.447 [2024-04-17 08:25:29.566159] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
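For reference, the nvmf_veth_init sequence traced above reduces to a small topology: two veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace and addressed as 10.0.0.2 and 10.0.0.3, an initiator pair left on the host at 10.0.0.1, the host-side ends enslaved to the nvmf_br bridge, and iptables rules opening port 4420 and bridge forwarding. The sketch below is reconstructed from the commands visible in this trace (same interface names and addresses); it omits the cleanup/teardown steps and is not the common.sh helper itself.

#!/usr/bin/env bash
# Sketch of the veth/bridge topology built by the trace above (fresh setup only).
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
# One initiator-side pair and two target-side pairs.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Target ends live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
# Bridge the host-side ends so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic (port 4420) in and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity test that this topology is wired correctly before nvmf_tgt is started inside the namespace.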
00:28:56.447 [2024-04-17 08:25:29.566220] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.447 [2024-04-17 08:25:29.707515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.707 [2024-04-17 08:25:29.801734] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:56.707 [2024-04-17 08:25:29.801875] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.707 [2024-04-17 08:25:29.801885] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.707 [2024-04-17 08:25:29.801892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.707 [2024-04-17 08:25:29.801921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.274 08:25:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:57.274 08:25:30 -- common/autotest_common.sh@852 -- # return 0 00:28:57.274 08:25:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:57.274 08:25:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:57.274 08:25:30 -- common/autotest_common.sh@10 -- # set +x 00:28:57.274 08:25:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.274 08:25:30 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:28:57.274 08:25:30 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:28:57.533 true 00:28:57.533 08:25:30 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:57.533 08:25:30 -- target/tls.sh@82 -- # jq -r .tls_version 00:28:57.533 08:25:30 -- target/tls.sh@82 -- # version=0 00:28:57.533 08:25:30 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:28:57.533 08:25:30 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:28:57.793 08:25:31 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:57.793 08:25:31 -- target/tls.sh@90 -- # jq -r .tls_version 00:28:58.052 08:25:31 -- target/tls.sh@90 -- # version=13 00:28:58.052 08:25:31 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:28:58.052 08:25:31 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:28:58.320 08:25:31 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:58.320 08:25:31 -- target/tls.sh@98 -- # jq -r .tls_version 00:28:58.593 08:25:31 -- target/tls.sh@98 -- # version=7 00:28:58.593 08:25:31 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:28:58.593 08:25:31 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:28:58.593 08:25:31 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:58.593 08:25:31 -- target/tls.sh@105 -- # ktls=false 00:28:58.593 08:25:31 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:28:58.593 08:25:31 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:28:58.853 08:25:32 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:58.853 08:25:32 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:28:59.113 08:25:32 -- target/tls.sh@113 -- # ktls=true 00:28:59.113 08:25:32 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:28:59.113 08:25:32 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:28:59.372 08:25:32 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:59.372 08:25:32 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:28:59.372 08:25:32 -- target/tls.sh@121 -- # ktls=false 00:28:59.372 08:25:32 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:28:59.372 08:25:32 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:28:59.372 08:25:32 -- target/tls.sh@49 -- # local key hash crc 00:28:59.372 08:25:32 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:28:59.372 08:25:32 -- target/tls.sh@51 -- # hash=01 00:28:59.372 08:25:32 -- target/tls.sh@52 -- # gzip -1 -c 00:28:59.372 08:25:32 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:28:59.372 08:25:32 -- target/tls.sh@52 -- # head -c 4 00:28:59.372 08:25:32 -- target/tls.sh@52 -- # tail -c8 00:28:59.631 08:25:32 -- target/tls.sh@52 -- # crc='p$H�' 00:28:59.631 08:25:32 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:28:59.631 08:25:32 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:28:59.631 08:25:32 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:59.631 08:25:32 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:59.631 08:25:32 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:28:59.631 08:25:32 -- target/tls.sh@49 -- # local key hash crc 00:28:59.631 08:25:32 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:28:59.631 08:25:32 -- target/tls.sh@51 -- # hash=01 00:28:59.631 08:25:32 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:28:59.631 08:25:32 -- target/tls.sh@52 -- # tail -c8 00:28:59.631 08:25:32 -- target/tls.sh@52 -- # gzip -1 -c 00:28:59.631 08:25:32 -- target/tls.sh@52 -- # head -c 4 00:28:59.631 08:25:32 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:28:59.631 08:25:32 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:28:59.631 08:25:32 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:28:59.631 08:25:32 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:28:59.631 08:25:32 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:28:59.631 08:25:32 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:28:59.631 08:25:32 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:28:59.631 08:25:32 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:59.631 08:25:32 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:28:59.631 08:25:32 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:28:59.631 08:25:32 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:28:59.631 08:25:32 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:28:59.631 08:25:32 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:28:59.892 08:25:33 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:28:59.892 08:25:33 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:28:59.892 08:25:33 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:00.151 [2024-04-17 08:25:33.407268] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.151 08:25:33 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:00.410 08:25:33 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:00.669 [2024-04-17 08:25:33.778609] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:00.669 [2024-04-17 08:25:33.778877] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.669 08:25:33 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:00.669 malloc0 00:29:00.669 08:25:33 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:00.930 08:25:34 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:01.189 08:25:34 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:13.413 Initializing NVMe Controllers 00:29:13.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:13.413 Initialization complete. Launching workers. 
00:29:13.413 ======================================================== 00:29:13.413 Latency(us) 00:29:13.413 Device Information : IOPS MiB/s Average min max 00:29:13.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11707.66 45.73 5467.50 944.01 14744.73 00:29:13.414 ======================================================== 00:29:13.414 Total : 11707.66 45.73 5467.50 944.01 14744.73 00:29:13.414 00:29:13.414 08:25:44 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:13.414 08:25:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:13.414 08:25:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:13.414 08:25:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:13.414 08:25:44 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:29:13.414 08:25:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:13.414 08:25:44 -- target/tls.sh@28 -- # bdevperf_pid=64956 00:29:13.414 08:25:44 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:13.414 08:25:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:13.414 08:25:44 -- target/tls.sh@31 -- # waitforlisten 64956 /var/tmp/bdevperf.sock 00:29:13.414 08:25:44 -- common/autotest_common.sh@819 -- # '[' -z 64956 ']' 00:29:13.414 08:25:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:13.414 08:25:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:13.414 08:25:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:13.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:13.414 08:25:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:13.414 08:25:44 -- common/autotest_common.sh@10 -- # set +x 00:29:13.414 [2024-04-17 08:25:44.650103] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:13.414 [2024-04-17 08:25:44.650228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64956 ] 00:29:13.414 [2024-04-17 08:25:44.788953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.414 [2024-04-17 08:25:44.899102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.414 08:25:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:13.414 08:25:45 -- common/autotest_common.sh@852 -- # return 0 00:29:13.414 08:25:45 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:13.414 [2024-04-17 08:25:45.714746] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:13.414 TLSTESTn1 00:29:13.414 08:25:45 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:29:13.414 Running I/O for 10 seconds... 
00:29:23.390 00:29:23.390 Latency(us) 00:29:23.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.390 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:23.390 Verification LBA range: start 0x0 length 0x2000 00:29:23.390 TLSTESTn1 : 10.02 6137.18 23.97 0.00 0.00 20822.73 4235.51 21520.99 00:29:23.390 =================================================================================================================== 00:29:23.390 Total : 6137.18 23.97 0.00 0.00 20822.73 4235.51 21520.99 00:29:23.390 0 00:29:23.390 08:25:55 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.390 08:25:55 -- target/tls.sh@45 -- # killprocess 64956 00:29:23.390 08:25:55 -- common/autotest_common.sh@926 -- # '[' -z 64956 ']' 00:29:23.390 08:25:55 -- common/autotest_common.sh@930 -- # kill -0 64956 00:29:23.390 08:25:55 -- common/autotest_common.sh@931 -- # uname 00:29:23.390 08:25:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:23.390 08:25:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64956 00:29:23.390 killing process with pid 64956 00:29:23.390 Received shutdown signal, test time was about 10.000000 seconds 00:29:23.390 00:29:23.390 Latency(us) 00:29:23.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.390 =================================================================================================================== 00:29:23.390 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:23.390 08:25:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:29:23.390 08:25:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:29:23.390 08:25:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64956' 00:29:23.390 08:25:55 -- common/autotest_common.sh@945 -- # kill 64956 00:29:23.390 08:25:55 -- common/autotest_common.sh@950 -- # wait 64956 00:29:23.390 08:25:56 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:29:23.390 08:25:56 -- common/autotest_common.sh@640 -- # local es=0 00:29:23.390 08:25:56 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:29:23.390 08:25:56 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:29:23.390 08:25:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:23.390 08:25:56 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:29:23.390 08:25:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:23.390 08:25:56 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:29:23.390 08:25:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:23.390 08:25:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:23.390 08:25:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:23.390 08:25:56 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:29:23.390 08:25:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:23.390 08:25:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:23.390 08:25:56 -- 
target/tls.sh@28 -- # bdevperf_pid=65084 00:29:23.390 08:25:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:23.390 08:25:56 -- target/tls.sh@31 -- # waitforlisten 65084 /var/tmp/bdevperf.sock 00:29:23.390 08:25:56 -- common/autotest_common.sh@819 -- # '[' -z 65084 ']' 00:29:23.390 08:25:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.390 08:25:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:23.390 08:25:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:23.390 08:25:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:23.390 08:25:56 -- common/autotest_common.sh@10 -- # set +x 00:29:23.390 [2024-04-17 08:25:56.236037] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:23.390 [2024-04-17 08:25:56.236175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65084 ] 00:29:23.390 [2024-04-17 08:25:56.373138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.390 [2024-04-17 08:25:56.473411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.957 08:25:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:23.957 08:25:57 -- common/autotest_common.sh@852 -- # return 0 00:29:23.957 08:25:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:29:24.216 [2024-04-17 08:25:57.318817] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:24.216 [2024-04-17 08:25:57.327131] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:24.216 [2024-04-17 08:25:57.327472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac77c0 (107): Transport endpoint is not connected 00:29:24.216 [2024-04-17 08:25:57.328459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac77c0 (9): Bad file descriptor 00:29:24.216 [2024-04-17 08:25:57.329455] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.216 [2024-04-17 08:25:57.329513] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:29:24.216 [2024-04-17 08:25:57.329557] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:24.216 request: 00:29:24.216 { 00:29:24.216 "name": "TLSTEST", 00:29:24.216 "trtype": "tcp", 00:29:24.216 "traddr": "10.0.0.2", 00:29:24.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.216 "adrfam": "ipv4", 00:29:24.216 "trsvcid": "4420", 00:29:24.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.216 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:29:24.216 "method": "bdev_nvme_attach_controller", 00:29:24.216 "req_id": 1 00:29:24.216 } 00:29:24.216 Got JSON-RPC error response 00:29:24.216 response: 00:29:24.216 { 00:29:24.216 "code": -32602, 00:29:24.216 "message": "Invalid parameters" 00:29:24.216 } 00:29:24.216 08:25:57 -- target/tls.sh@36 -- # killprocess 65084 00:29:24.216 08:25:57 -- common/autotest_common.sh@926 -- # '[' -z 65084 ']' 00:29:24.216 08:25:57 -- common/autotest_common.sh@930 -- # kill -0 65084 00:29:24.216 08:25:57 -- common/autotest_common.sh@931 -- # uname 00:29:24.216 08:25:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:24.216 08:25:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65084 00:29:24.216 08:25:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:29:24.216 08:25:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:29:24.216 killing process with pid 65084 00:29:24.216 08:25:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65084' 00:29:24.216 08:25:57 -- common/autotest_common.sh@945 -- # kill 65084 00:29:24.216 Received shutdown signal, test time was about 10.000000 seconds 00:29:24.216 00:29:24.216 Latency(us) 00:29:24.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.216 =================================================================================================================== 00:29:24.216 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:24.216 08:25:57 -- common/autotest_common.sh@950 -- # wait 65084 00:29:24.475 08:25:57 -- target/tls.sh@37 -- # return 1 00:29:24.475 08:25:57 -- common/autotest_common.sh@643 -- # es=1 00:29:24.475 08:25:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:24.475 08:25:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:24.475 08:25:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:24.475 08:25:57 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:24.475 08:25:57 -- common/autotest_common.sh@640 -- # local es=0 00:29:24.475 08:25:57 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:24.475 08:25:57 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:29:24.475 08:25:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.475 08:25:57 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:29:24.475 08:25:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:24.475 08:25:57 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:24.475 08:25:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:24.475 08:25:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:24.475 08:25:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:29:24.475 08:25:57 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:29:24.475 08:25:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:24.475 08:25:57 -- target/tls.sh@28 -- # bdevperf_pid=65112 00:29:24.475 08:25:57 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:24.475 08:25:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:24.475 08:25:57 -- target/tls.sh@31 -- # waitforlisten 65112 /var/tmp/bdevperf.sock 00:29:24.475 08:25:57 -- common/autotest_common.sh@819 -- # '[' -z 65112 ']' 00:29:24.475 08:25:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:24.475 08:25:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:24.475 08:25:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:24.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:24.475 08:25:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:24.475 08:25:57 -- common/autotest_common.sh@10 -- # set +x 00:29:24.475 [2024-04-17 08:25:57.647483] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:24.475 [2024-04-17 08:25:57.647627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65112 ] 00:29:24.475 [2024-04-17 08:25:57.771395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.735 [2024-04-17 08:25:57.871780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.303 08:25:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:25.303 08:25:58 -- common/autotest_common.sh@852 -- # return 0 00:29:25.303 08:25:58 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:25.562 [2024-04-17 08:25:58.764855] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:25.562 [2024-04-17 08:25:58.774443] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:29:25.562 [2024-04-17 08:25:58.774486] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:29:25.562 [2024-04-17 08:25:58.774543] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:25.562 [2024-04-17 08:25:58.775374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11427c0 (107): Transport endpoint is not connected 00:29:25.562 [2024-04-17 08:25:58.776364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11427c0 (9): Bad file descriptor 00:29:25.562 [2024-04-17 08:25:58.777359] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.562 [2024-04-17 08:25:58.777378] nvme.c: 
708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:29:25.562 [2024-04-17 08:25:58.777389] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.562 request: 00:29:25.562 { 00:29:25.562 "name": "TLSTEST", 00:29:25.562 "trtype": "tcp", 00:29:25.562 "traddr": "10.0.0.2", 00:29:25.562 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:25.562 "adrfam": "ipv4", 00:29:25.562 "trsvcid": "4420", 00:29:25.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.562 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:29:25.562 "method": "bdev_nvme_attach_controller", 00:29:25.562 "req_id": 1 00:29:25.562 } 00:29:25.562 Got JSON-RPC error response 00:29:25.562 response: 00:29:25.562 { 00:29:25.562 "code": -32602, 00:29:25.562 "message": "Invalid parameters" 00:29:25.562 } 00:29:25.562 08:25:58 -- target/tls.sh@36 -- # killprocess 65112 00:29:25.562 08:25:58 -- common/autotest_common.sh@926 -- # '[' -z 65112 ']' 00:29:25.562 08:25:58 -- common/autotest_common.sh@930 -- # kill -0 65112 00:29:25.562 08:25:58 -- common/autotest_common.sh@931 -- # uname 00:29:25.562 08:25:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:25.563 08:25:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65112 00:29:25.563 killing process with pid 65112 00:29:25.563 Received shutdown signal, test time was about 10.000000 seconds 00:29:25.563 00:29:25.563 Latency(us) 00:29:25.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.563 =================================================================================================================== 00:29:25.563 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:25.563 08:25:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:29:25.563 08:25:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:29:25.563 08:25:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65112' 00:29:25.563 08:25:58 -- common/autotest_common.sh@945 -- # kill 65112 00:29:25.563 08:25:58 -- common/autotest_common.sh@950 -- # wait 65112 00:29:25.821 08:25:59 -- target/tls.sh@37 -- # return 1 00:29:25.821 08:25:59 -- common/autotest_common.sh@643 -- # es=1 00:29:25.821 08:25:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.821 08:25:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.821 08:25:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.821 08:25:59 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:25.821 08:25:59 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.821 08:25:59 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:25.821 08:25:59 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:29:25.821 08:25:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.821 08:25:59 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:29:25.821 08:25:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.821 08:25:59 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:25.821 08:25:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:25.821 
08:25:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:29:25.821 08:25:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:25.821 08:25:59 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:29:25.821 08:25:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:25.821 08:25:59 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:25.821 08:25:59 -- target/tls.sh@28 -- # bdevperf_pid=65139 00:29:25.821 08:25:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:25.821 08:25:59 -- target/tls.sh@31 -- # waitforlisten 65139 /var/tmp/bdevperf.sock 00:29:25.821 08:25:59 -- common/autotest_common.sh@819 -- # '[' -z 65139 ']' 00:29:25.821 08:25:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:25.821 08:25:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:25.821 08:25:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:25.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:25.821 08:25:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:25.821 08:25:59 -- common/autotest_common.sh@10 -- # set +x 00:29:25.821 [2024-04-17 08:25:59.093749] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:25.821 [2024-04-17 08:25:59.093899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65139 ] 00:29:26.080 [2024-04-17 08:25:59.233984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.080 [2024-04-17 08:25:59.334246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.647 08:25:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:26.647 08:25:59 -- common/autotest_common.sh@852 -- # return 0 00:29:26.647 08:25:59 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:29:26.907 [2024-04-17 08:26:00.145061] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:26.907 [2024-04-17 08:26:00.151965] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:29:26.907 [2024-04-17 08:26:00.152094] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:29:26.907 [2024-04-17 08:26:00.152202] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:26.907 [2024-04-17 08:26:00.152650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a17c0 (107): Transport endpoint is not connected 00:29:26.907 [2024-04-17 08:26:00.153638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a17c0 (9): Bad file 
descriptor 00:29:26.907 [2024-04-17 08:26:00.154633] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:26.907 [2024-04-17 08:26:00.154687] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:29:26.907 [2024-04-17 08:26:00.154732] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:26.907 request: 00:29:26.907 { 00:29:26.907 "name": "TLSTEST", 00:29:26.907 "trtype": "tcp", 00:29:26.907 "traddr": "10.0.0.2", 00:29:26.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.907 "adrfam": "ipv4", 00:29:26.907 "trsvcid": "4420", 00:29:26.907 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:26.907 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:29:26.907 "method": "bdev_nvme_attach_controller", 00:29:26.907 "req_id": 1 00:29:26.907 } 00:29:26.907 Got JSON-RPC error response 00:29:26.907 response: 00:29:26.907 { 00:29:26.907 "code": -32602, 00:29:26.907 "message": "Invalid parameters" 00:29:26.907 } 00:29:26.907 08:26:00 -- target/tls.sh@36 -- # killprocess 65139 00:29:26.907 08:26:00 -- common/autotest_common.sh@926 -- # '[' -z 65139 ']' 00:29:26.907 08:26:00 -- common/autotest_common.sh@930 -- # kill -0 65139 00:29:26.907 08:26:00 -- common/autotest_common.sh@931 -- # uname 00:29:26.907 08:26:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:26.907 08:26:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65139 00:29:26.907 killing process with pid 65139 00:29:26.907 Received shutdown signal, test time was about 10.000000 seconds 00:29:26.907 00:29:26.907 Latency(us) 00:29:26.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.907 =================================================================================================================== 00:29:26.907 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:26.907 08:26:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:29:26.907 08:26:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:29:26.907 08:26:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65139' 00:29:26.907 08:26:00 -- common/autotest_common.sh@945 -- # kill 65139 00:29:26.907 08:26:00 -- common/autotest_common.sh@950 -- # wait 65139 00:29:27.167 08:26:00 -- target/tls.sh@37 -- # return 1 00:29:27.167 08:26:00 -- common/autotest_common.sh@643 -- # es=1 00:29:27.167 08:26:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:27.167 08:26:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:27.167 08:26:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:27.167 08:26:00 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:29:27.167 08:26:00 -- common/autotest_common.sh@640 -- # local es=0 00:29:27.167 08:26:00 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:29:27.167 08:26:00 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:29:27.167 08:26:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:27.167 08:26:00 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:29:27.167 08:26:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:27.167 08:26:00 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:29:27.167 08:26:00 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:27.167 08:26:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:27.167 08:26:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:27.167 08:26:00 -- target/tls.sh@23 -- # psk= 00:29:27.167 08:26:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:27.167 08:26:00 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:27.167 08:26:00 -- target/tls.sh@28 -- # bdevperf_pid=65167 00:29:27.167 08:26:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:27.167 08:26:00 -- target/tls.sh@31 -- # waitforlisten 65167 /var/tmp/bdevperf.sock 00:29:27.167 08:26:00 -- common/autotest_common.sh@819 -- # '[' -z 65167 ']' 00:29:27.167 08:26:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:27.167 08:26:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:27.167 08:26:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:27.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:27.167 08:26:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:27.167 08:26:00 -- common/autotest_common.sh@10 -- # set +x 00:29:27.167 [2024-04-17 08:26:00.490935] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:27.167 [2024-04-17 08:26:00.491120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65167 ] 00:29:27.425 [2024-04-17 08:26:00.623290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.425 [2024-04-17 08:26:00.721812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.361 08:26:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:28.361 08:26:01 -- common/autotest_common.sh@852 -- # return 0 00:29:28.361 08:26:01 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:29:28.361 [2024-04-17 08:26:01.559452] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:28.361 [2024-04-17 08:26:01.561391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153a000 (9): Bad file descriptor 00:29:28.361 [2024-04-17 08:26:01.562384] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.361 [2024-04-17 08:26:01.562475] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:29:28.361 [2024-04-17 08:26:01.562530] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:28.361 request: 00:29:28.361 { 00:29:28.361 "name": "TLSTEST", 00:29:28.361 "trtype": "tcp", 00:29:28.361 "traddr": "10.0.0.2", 00:29:28.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:28.361 "adrfam": "ipv4", 00:29:28.361 "trsvcid": "4420", 00:29:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:28.361 "method": "bdev_nvme_attach_controller", 00:29:28.361 "req_id": 1 00:29:28.361 } 00:29:28.361 Got JSON-RPC error response 00:29:28.361 response: 00:29:28.361 { 00:29:28.361 "code": -32602, 00:29:28.361 "message": "Invalid parameters" 00:29:28.361 } 00:29:28.361 08:26:01 -- target/tls.sh@36 -- # killprocess 65167 00:29:28.361 08:26:01 -- common/autotest_common.sh@926 -- # '[' -z 65167 ']' 00:29:28.361 08:26:01 -- common/autotest_common.sh@930 -- # kill -0 65167 00:29:28.361 08:26:01 -- common/autotest_common.sh@931 -- # uname 00:29:28.361 08:26:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:28.361 08:26:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65167 00:29:28.361 killing process with pid 65167 00:29:28.361 Received shutdown signal, test time was about 10.000000 seconds 00:29:28.361 00:29:28.361 Latency(us) 00:29:28.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.361 =================================================================================================================== 00:29:28.361 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:28.361 08:26:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:29:28.361 08:26:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:29:28.361 08:26:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65167' 00:29:28.361 08:26:01 -- common/autotest_common.sh@945 -- # kill 65167 00:29:28.361 08:26:01 -- common/autotest_common.sh@950 -- # wait 65167 00:29:28.619 08:26:01 -- target/tls.sh@37 -- # return 1 00:29:28.619 08:26:01 -- common/autotest_common.sh@643 -- # es=1 00:29:28.619 08:26:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:28.619 08:26:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:28.619 08:26:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:28.619 08:26:01 -- target/tls.sh@167 -- # killprocess 64720 00:29:28.619 08:26:01 -- common/autotest_common.sh@926 -- # '[' -z 64720 ']' 00:29:28.619 08:26:01 -- common/autotest_common.sh@930 -- # kill -0 64720 00:29:28.619 08:26:01 -- common/autotest_common.sh@931 -- # uname 00:29:28.619 08:26:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:28.619 08:26:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64720 00:29:28.619 08:26:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:28.619 08:26:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:28.619 08:26:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64720' 00:29:28.619 killing process with pid 64720 00:29:28.619 08:26:01 -- common/autotest_common.sh@945 -- # kill 64720 00:29:28.619 08:26:01 -- common/autotest_common.sh@950 -- # wait 64720 00:29:28.877 08:26:02 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:29:28.877 08:26:02 -- target/tls.sh@49 -- # local key hash crc 00:29:28.878 08:26:02 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:29:28.878 08:26:02 -- target/tls.sh@51 -- # hash=02 00:29:28.878 08:26:02 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:29:28.878 08:26:02 -- target/tls.sh@52 -- # gzip -1 -c 00:29:28.878 08:26:02 -- target/tls.sh@52 -- # tail -c8 00:29:28.878 08:26:02 -- target/tls.sh@52 -- # head -c 4 00:29:28.878 08:26:02 -- target/tls.sh@52 -- # crc='�e�'\''' 00:29:28.878 08:26:02 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:29:28.878 08:26:02 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:29:28.878 08:26:02 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:29:28.878 08:26:02 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:29:28.878 08:26:02 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:28.878 08:26:02 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:29:28.878 08:26:02 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:28.878 08:26:02 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:29:28.878 08:26:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:28.878 08:26:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:28.878 08:26:02 -- common/autotest_common.sh@10 -- # set +x 00:29:28.878 08:26:02 -- nvmf/common.sh@469 -- # nvmfpid=65208 00:29:28.878 08:26:02 -- nvmf/common.sh@470 -- # waitforlisten 65208 00:29:28.878 08:26:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:28.878 08:26:02 -- common/autotest_common.sh@819 -- # '[' -z 65208 ']' 00:29:28.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.878 08:26:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.878 08:26:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:28.878 08:26:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.878 08:26:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:28.878 08:26:02 -- common/autotest_common.sh@10 -- # set +x 00:29:28.878 [2024-04-17 08:26:02.183655] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:28.878 [2024-04-17 08:26:02.183799] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.136 [2024-04-17 08:26:02.322611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.136 [2024-04-17 08:26:02.426419] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:29.136 [2024-04-17 08:26:02.426636] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.136 [2024-04-17 08:26:02.426676] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.136 [2024-04-17 08:26:02.426755] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
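The format_interchange_psk calls traced at target/tls.sh@49-54 above show how the test derives the NVMe/TCP TLS interchange PSKs written to key1.txt, key2.txt and key_long.txt: the configured hex key string is piped through gzip -1 solely to obtain its CRC32 (the last 8 bytes of a gzip stream are CRC32 plus input length, so tail -c8 | head -c4 extracts the checksum bytes), the 4 raw CRC bytes are appended to the key, and the result is base64-encoded into the NVMeTLSkey-1:<hash>:<base64>: form. The sketch below is reconstructed from the trace rather than copied from tls.sh, and it glosses over binary-safety of shell variables, which happens to be harmless for the keys used in this run.

#!/usr/bin/env bash
# Sketch of the interchange-PSK formatting seen in the trace above.
format_interchange_psk() {
    local key=$1 hash=$2 crc
    # gzip appends CRC32 (little-endian) plus input length as its final 8 bytes;
    # keep only the 4 CRC bytes.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # Interchange format: NVMeTLSkey-1:<hash>:<base64(key || crc)>:
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
}

format_interchange_psk 00112233445566778899aabbccddeeff 01
# Expected (matches the key1.txt value in this trace):
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The hash field ("01" for the short keys, "02" for the 48-byte key_long used from target/tls.sh@168 onward) only labels the PSK hash variant in the interchange header; the derivation of the base64 payload is the same in both cases.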
00:29:29.136 [2024-04-17 08:26:02.426827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.070 08:26:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:30.070 08:26:03 -- common/autotest_common.sh@852 -- # return 0 00:29:30.070 08:26:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:30.070 08:26:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:30.070 08:26:03 -- common/autotest_common.sh@10 -- # set +x 00:29:30.070 08:26:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.070 08:26:03 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:30.070 08:26:03 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:30.070 08:26:03 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:30.071 [2024-04-17 08:26:03.362470] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.071 08:26:03 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:30.329 08:26:03 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:30.587 [2024-04-17 08:26:03.765750] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:30.587 [2024-04-17 08:26:03.765956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.587 08:26:03 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:30.845 malloc0 00:29:30.845 08:26:04 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:31.104 08:26:04 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:31.363 08:26:04 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:31.363 08:26:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:31.363 08:26:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:31.363 08:26:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:31.363 08:26:04 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:29:31.363 08:26:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:31.363 08:26:04 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:31.363 08:26:04 -- target/tls.sh@28 -- # bdevperf_pid=65263 00:29:31.363 08:26:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:31.363 08:26:04 -- target/tls.sh@31 -- # waitforlisten 65263 /var/tmp/bdevperf.sock 00:29:31.363 08:26:04 -- common/autotest_common.sh@819 -- # '[' -z 65263 ']' 00:29:31.363 08:26:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:31.363 08:26:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:31.363 08:26:04 -- common/autotest_common.sh@826 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:31.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:31.363 08:26:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:31.363 08:26:04 -- common/autotest_common.sh@10 -- # set +x 00:29:31.363 [2024-04-17 08:26:04.475403] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:31.363 [2024-04-17 08:26:04.475546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65263 ] 00:29:31.363 [2024-04-17 08:26:04.601294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.621 [2024-04-17 08:26:04.701445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.188 08:26:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:32.188 08:26:05 -- common/autotest_common.sh@852 -- # return 0 00:29:32.188 08:26:05 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:32.447 [2024-04-17 08:26:05.590676] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:32.447 TLSTESTn1 00:29:32.447 08:26:05 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:29:32.707 Running I/O for 10 seconds... 00:29:42.688 00:29:42.688 Latency(us) 00:29:42.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.688 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:42.688 Verification LBA range: start 0x0 length 0x2000 00:29:42.689 TLSTESTn1 : 10.02 6163.57 24.08 0.00 0.00 20732.87 4321.37 20261.79 00:29:42.689 =================================================================================================================== 00:29:42.689 Total : 6163.57 24.08 0.00 0.00 20732.87 4321.37 20261.79 00:29:42.689 0 00:29:42.689 08:26:15 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:42.689 08:26:15 -- target/tls.sh@45 -- # killprocess 65263 00:29:42.689 08:26:15 -- common/autotest_common.sh@926 -- # '[' -z 65263 ']' 00:29:42.689 08:26:15 -- common/autotest_common.sh@930 -- # kill -0 65263 00:29:42.689 08:26:15 -- common/autotest_common.sh@931 -- # uname 00:29:42.689 08:26:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:42.689 08:26:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65263 00:29:42.689 08:26:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:29:42.689 08:26:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:29:42.689 08:26:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65263' 00:29:42.689 killing process with pid 65263 00:29:42.689 08:26:15 -- common/autotest_common.sh@945 -- # kill 65263 00:29:42.689 Received shutdown signal, test time was about 10.000000 seconds 00:29:42.689 00:29:42.689 Latency(us) 00:29:42.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.689 
=================================================================================================================== 00:29:42.689 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.689 08:26:15 -- common/autotest_common.sh@950 -- # wait 65263 00:29:42.948 08:26:16 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:42.948 08:26:16 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:42.948 08:26:16 -- common/autotest_common.sh@640 -- # local es=0 00:29:42.948 08:26:16 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:42.948 08:26:16 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:29:42.948 08:26:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:42.948 08:26:16 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:29:42.948 08:26:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:42.948 08:26:16 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:42.948 08:26:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:42.948 08:26:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:42.948 08:26:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:42.948 08:26:16 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:29:42.948 08:26:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:42.948 08:26:16 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:42.948 08:26:16 -- target/tls.sh@28 -- # bdevperf_pid=65393 00:29:42.948 08:26:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:42.948 08:26:16 -- target/tls.sh@31 -- # waitforlisten 65393 /var/tmp/bdevperf.sock 00:29:42.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:42.948 08:26:16 -- common/autotest_common.sh@819 -- # '[' -z 65393 ']' 00:29:42.948 08:26:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.948 08:26:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:42.948 08:26:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.948 08:26:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:42.948 08:26:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.948 [2024-04-17 08:26:16.137542] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
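The chmod 0666 above deliberately breaks the PSK file permission check; the bdevperf instance starting here (spdk_pid65393) is expected to have its attach rejected. Condensed from the trace into a runnable sketch (the expected outcome is noted as a comment, not asserted by the sketch itself):

    chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    # expected: JSON-RPC error -22, 'Could not retrieve PSK from file' (shown below)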
00:29:42.948 [2024-04-17 08:26:16.137663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65393 ] 00:29:42.948 [2024-04-17 08:26:16.275498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.208 [2024-04-17 08:26:16.375092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.776 08:26:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:43.776 08:26:17 -- common/autotest_common.sh@852 -- # return 0 00:29:43.776 08:26:17 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:44.036 [2024-04-17 08:26:17.259881] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:44.036 [2024-04-17 08:26:17.260036] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:29:44.036 request: 00:29:44.036 { 00:29:44.036 "name": "TLSTEST", 00:29:44.036 "trtype": "tcp", 00:29:44.036 "traddr": "10.0.0.2", 00:29:44.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:44.036 "adrfam": "ipv4", 00:29:44.036 "trsvcid": "4420", 00:29:44.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.036 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:29:44.036 "method": "bdev_nvme_attach_controller", 00:29:44.036 "req_id": 1 00:29:44.036 } 00:29:44.036 Got JSON-RPC error response 00:29:44.036 response: 00:29:44.036 { 00:29:44.036 "code": -22, 00:29:44.036 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:29:44.036 } 00:29:44.036 08:26:17 -- target/tls.sh@36 -- # killprocess 65393 00:29:44.036 08:26:17 -- common/autotest_common.sh@926 -- # '[' -z 65393 ']' 00:29:44.036 08:26:17 -- common/autotest_common.sh@930 -- # kill -0 65393 00:29:44.036 08:26:17 -- common/autotest_common.sh@931 -- # uname 00:29:44.036 08:26:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:44.036 08:26:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65393 00:29:44.036 killing process with pid 65393 00:29:44.036 Received shutdown signal, test time was about 10.000000 seconds 00:29:44.036 00:29:44.036 Latency(us) 00:29:44.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.036 =================================================================================================================== 00:29:44.036 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:44.036 08:26:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:29:44.036 08:26:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:29:44.036 08:26:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65393' 00:29:44.036 08:26:17 -- common/autotest_common.sh@945 -- # kill 65393 00:29:44.036 08:26:17 -- common/autotest_common.sh@950 -- # wait 65393 00:29:44.295 08:26:17 -- target/tls.sh@37 -- # return 1 00:29:44.295 08:26:17 -- common/autotest_common.sh@643 -- # es=1 00:29:44.295 08:26:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:44.295 08:26:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:44.295 08:26:17 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:44.295 08:26:17 -- target/tls.sh@183 -- # killprocess 65208 00:29:44.295 08:26:17 -- common/autotest_common.sh@926 -- # '[' -z 65208 ']' 00:29:44.295 08:26:17 -- common/autotest_common.sh@930 -- # kill -0 65208 00:29:44.295 08:26:17 -- common/autotest_common.sh@931 -- # uname 00:29:44.295 08:26:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:44.295 08:26:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65208 00:29:44.295 08:26:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:44.295 08:26:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:44.295 08:26:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65208' 00:29:44.295 killing process with pid 65208 00:29:44.295 08:26:17 -- common/autotest_common.sh@945 -- # kill 65208 00:29:44.295 08:26:17 -- common/autotest_common.sh@950 -- # wait 65208 00:29:44.554 08:26:17 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:29:44.554 08:26:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:44.554 08:26:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:44.554 08:26:17 -- common/autotest_common.sh@10 -- # set +x 00:29:44.554 08:26:17 -- nvmf/common.sh@469 -- # nvmfpid=65431 00:29:44.554 08:26:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:44.554 08:26:17 -- nvmf/common.sh@470 -- # waitforlisten 65431 00:29:44.554 08:26:17 -- common/autotest_common.sh@819 -- # '[' -z 65431 ']' 00:29:44.554 08:26:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.554 08:26:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:44.554 08:26:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.554 08:26:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:44.554 08:26:17 -- common/autotest_common.sh@10 -- # set +x 00:29:44.554 [2024-04-17 08:26:17.868752] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:44.554 [2024-04-17 08:26:17.868920] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.813 [2024-04-17 08:26:18.011228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.813 [2024-04-17 08:26:18.123222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:44.813 [2024-04-17 08:26:18.123405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.813 [2024-04-17 08:26:18.123417] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.813 [2024-04-17 08:26:18.123426] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
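Each phase of this trace is bracketed by the killprocess helper. A simplified reconstruction, assuming only the behaviour visible in the logged common/autotest_common.sh lines (liveness check with kill -0, a ps comm= lookup for the log message, then kill and wait); the real helper appears to have an extra branch for sudo-owned processes:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                    # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_1 / reactor_2 in this trace
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap it so the next phase starts clean
    }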
00:29:44.813 [2024-04-17 08:26:18.123458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.751 08:26:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:45.751 08:26:18 -- common/autotest_common.sh@852 -- # return 0 00:29:45.751 08:26:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:45.751 08:26:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:45.751 08:26:18 -- common/autotest_common.sh@10 -- # set +x 00:29:45.751 08:26:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.751 08:26:18 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:45.752 08:26:18 -- common/autotest_common.sh@640 -- # local es=0 00:29:45.752 08:26:18 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:45.752 08:26:18 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:29:45.752 08:26:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:45.752 08:26:18 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:29:45.752 08:26:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:45.752 08:26:18 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:45.752 08:26:18 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:45.752 08:26:18 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:45.752 [2024-04-17 08:26:19.019501] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.752 08:26:19 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:46.010 08:26:19 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:46.269 [2024-04-17 08:26:19.458805] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:46.269 [2024-04-17 08:26:19.459026] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.269 08:26:19 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:46.529 malloc0 00:29:46.529 08:26:19 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:46.788 08:26:19 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:46.788 [2024-04-17 08:26:20.111358] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:29:46.788 [2024-04-17 08:26:20.111409] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:29:46.788 [2024-04-17 08:26:20.111428] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:29:46.788 request: 00:29:46.788 { 00:29:46.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:46.788 "host": "nqn.2016-06.io.spdk:host1", 00:29:46.788 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:29:46.788 "method": "nvmf_subsystem_add_host", 00:29:46.788 
"req_id": 1 00:29:46.788 } 00:29:46.788 Got JSON-RPC error response 00:29:46.788 response: 00:29:46.788 { 00:29:46.788 "code": -32603, 00:29:46.788 "message": "Internal error" 00:29:46.788 } 00:29:47.046 08:26:20 -- common/autotest_common.sh@643 -- # es=1 00:29:47.046 08:26:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:47.046 08:26:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:47.046 08:26:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:47.046 08:26:20 -- target/tls.sh@189 -- # killprocess 65431 00:29:47.046 08:26:20 -- common/autotest_common.sh@926 -- # '[' -z 65431 ']' 00:29:47.046 08:26:20 -- common/autotest_common.sh@930 -- # kill -0 65431 00:29:47.046 08:26:20 -- common/autotest_common.sh@931 -- # uname 00:29:47.046 08:26:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:47.046 08:26:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65431 00:29:47.046 killing process with pid 65431 00:29:47.046 08:26:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:47.046 08:26:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:47.046 08:26:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65431' 00:29:47.046 08:26:20 -- common/autotest_common.sh@945 -- # kill 65431 00:29:47.046 08:26:20 -- common/autotest_common.sh@950 -- # wait 65431 00:29:47.306 08:26:20 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:47.306 08:26:20 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:29:47.306 08:26:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:47.306 08:26:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:47.306 08:26:20 -- common/autotest_common.sh@10 -- # set +x 00:29:47.306 08:26:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:47.306 08:26:20 -- nvmf/common.sh@469 -- # nvmfpid=65490 00:29:47.306 08:26:20 -- nvmf/common.sh@470 -- # waitforlisten 65490 00:29:47.306 08:26:20 -- common/autotest_common.sh@819 -- # '[' -z 65490 ']' 00:29:47.306 08:26:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.306 08:26:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:47.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.306 08:26:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.306 08:26:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:47.306 08:26:20 -- common/autotest_common.sh@10 -- # set +x 00:29:47.306 [2024-04-17 08:26:20.475339] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:47.306 [2024-04-17 08:26:20.475426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.306 [2024-04-17 08:26:20.622420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.566 [2024-04-17 08:26:20.722780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:47.566 [2024-04-17 08:26:20.722925] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:47.566 [2024-04-17 08:26:20.722934] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.566 [2024-04-17 08:26:20.722941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.566 [2024-04-17 08:26:20.722975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.132 08:26:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:48.132 08:26:21 -- common/autotest_common.sh@852 -- # return 0 00:29:48.132 08:26:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:48.132 08:26:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:48.132 08:26:21 -- common/autotest_common.sh@10 -- # set +x 00:29:48.132 08:26:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.132 08:26:21 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:48.132 08:26:21 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:48.132 08:26:21 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:48.390 [2024-04-17 08:26:21.582743] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.390 08:26:21 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:48.649 08:26:21 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:48.989 [2024-04-17 08:26:21.990033] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:48.989 [2024-04-17 08:26:21.990245] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.989 08:26:22 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:48.989 malloc0 00:29:48.989 08:26:22 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:49.263 08:26:22 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:49.522 08:26:22 -- target/tls.sh@197 -- # bdevperf_pid=65539 00:29:49.522 08:26:22 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:49.522 08:26:22 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:49.522 08:26:22 -- target/tls.sh@200 -- # waitforlisten 65539 /var/tmp/bdevperf.sock 00:29:49.522 08:26:22 -- common/autotest_common.sh@819 -- # '[' -z 65539 ']' 00:29:49.522 08:26:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:49.522 08:26:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:49.522 08:26:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:49.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
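With the key file back at mode 0600, the target bring-up above succeeds end to end, and the trace now launches a bdevperf consumer (spdk_pid65539) that stays idle (-z) until it is driven over /var/tmp/bdevperf.sock. Condensed from the RPCs in the trace, the two halves of this phase are:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    # target side: TLS-enabled listener (-k) plus a host entry tied to the PSK file
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"
    # initiator side: attach a TLS-protected controller through bdevperf's RPC socket, then run I/O
    # (the test script waits for /var/tmp/bdevperf.sock to come up before issuing the next RPC)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests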
00:29:49.522 08:26:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:49.522 08:26:22 -- common/autotest_common.sh@10 -- # set +x 00:29:49.522 [2024-04-17 08:26:22.703653] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:49.522 [2024-04-17 08:26:22.703729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65539 ] 00:29:49.522 [2024-04-17 08:26:22.842273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.780 [2024-04-17 08:26:22.944710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.348 08:26:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:50.348 08:26:23 -- common/autotest_common.sh@852 -- # return 0 00:29:50.348 08:26:23 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:29:50.606 [2024-04-17 08:26:23.785967] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:50.606 TLSTESTn1 00:29:50.606 08:26:23 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:51.173 08:26:24 -- target/tls.sh@205 -- # tgtconf='{ 00:29:51.173 "subsystems": [ 00:29:51.173 { 00:29:51.173 "subsystem": "iobuf", 00:29:51.173 "config": [ 00:29:51.173 { 00:29:51.173 "method": "iobuf_set_options", 00:29:51.173 "params": { 00:29:51.173 "small_pool_count": 8192, 00:29:51.173 "large_pool_count": 1024, 00:29:51.173 "small_bufsize": 8192, 00:29:51.173 "large_bufsize": 135168 00:29:51.173 } 00:29:51.173 } 00:29:51.173 ] 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "subsystem": "sock", 00:29:51.173 "config": [ 00:29:51.173 { 00:29:51.173 "method": "sock_impl_set_options", 00:29:51.173 "params": { 00:29:51.173 "impl_name": "uring", 00:29:51.173 "recv_buf_size": 2097152, 00:29:51.173 "send_buf_size": 2097152, 00:29:51.173 "enable_recv_pipe": true, 00:29:51.173 "enable_quickack": false, 00:29:51.173 "enable_placement_id": 0, 00:29:51.173 "enable_zerocopy_send_server": false, 00:29:51.173 "enable_zerocopy_send_client": false, 00:29:51.173 "zerocopy_threshold": 0, 00:29:51.173 "tls_version": 0, 00:29:51.173 "enable_ktls": false 00:29:51.173 } 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "method": "sock_impl_set_options", 00:29:51.173 "params": { 00:29:51.173 "impl_name": "posix", 00:29:51.173 "recv_buf_size": 2097152, 00:29:51.173 "send_buf_size": 2097152, 00:29:51.173 "enable_recv_pipe": true, 00:29:51.173 "enable_quickack": false, 00:29:51.173 "enable_placement_id": 0, 00:29:51.173 "enable_zerocopy_send_server": true, 00:29:51.173 "enable_zerocopy_send_client": false, 00:29:51.173 "zerocopy_threshold": 0, 00:29:51.173 "tls_version": 0, 00:29:51.173 "enable_ktls": false 00:29:51.173 } 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "method": "sock_impl_set_options", 00:29:51.173 "params": { 00:29:51.173 "impl_name": "ssl", 00:29:51.173 "recv_buf_size": 4096, 00:29:51.173 "send_buf_size": 4096, 00:29:51.173 "enable_recv_pipe": true, 00:29:51.173 "enable_quickack": false, 00:29:51.173 "enable_placement_id": 0, 00:29:51.173 "enable_zerocopy_send_server": true, 00:29:51.173 "enable_zerocopy_send_client": false, 00:29:51.173 
"zerocopy_threshold": 0, 00:29:51.173 "tls_version": 0, 00:29:51.173 "enable_ktls": false 00:29:51.173 } 00:29:51.173 } 00:29:51.173 ] 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "subsystem": "vmd", 00:29:51.173 "config": [] 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "subsystem": "accel", 00:29:51.173 "config": [ 00:29:51.173 { 00:29:51.173 "method": "accel_set_options", 00:29:51.173 "params": { 00:29:51.173 "small_cache_size": 128, 00:29:51.173 "large_cache_size": 16, 00:29:51.173 "task_count": 2048, 00:29:51.173 "sequence_count": 2048, 00:29:51.173 "buf_count": 2048 00:29:51.173 } 00:29:51.173 } 00:29:51.173 ] 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "subsystem": "bdev", 00:29:51.173 "config": [ 00:29:51.173 { 00:29:51.173 "method": "bdev_set_options", 00:29:51.173 "params": { 00:29:51.173 "bdev_io_pool_size": 65535, 00:29:51.173 "bdev_io_cache_size": 256, 00:29:51.173 "bdev_auto_examine": true, 00:29:51.173 "iobuf_small_cache_size": 128, 00:29:51.173 "iobuf_large_cache_size": 16 00:29:51.173 } 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "method": "bdev_raid_set_options", 00:29:51.173 "params": { 00:29:51.173 "process_window_size_kb": 1024 00:29:51.173 } 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "method": "bdev_iscsi_set_options", 00:29:51.173 "params": { 00:29:51.173 "timeout_sec": 30 00:29:51.173 } 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "method": "bdev_nvme_set_options", 00:29:51.173 "params": { 00:29:51.173 "action_on_timeout": "none", 00:29:51.173 "timeout_us": 0, 00:29:51.173 "timeout_admin_us": 0, 00:29:51.173 "keep_alive_timeout_ms": 10000, 00:29:51.173 "transport_retry_count": 4, 00:29:51.173 "arbitration_burst": 0, 00:29:51.173 "low_priority_weight": 0, 00:29:51.173 "medium_priority_weight": 0, 00:29:51.173 "high_priority_weight": 0, 00:29:51.173 "nvme_adminq_poll_period_us": 10000, 00:29:51.173 "nvme_ioq_poll_period_us": 0, 00:29:51.173 "io_queue_requests": 0, 00:29:51.173 "delay_cmd_submit": true, 00:29:51.173 "bdev_retry_count": 3, 00:29:51.173 "transport_ack_timeout": 0, 00:29:51.173 "ctrlr_loss_timeout_sec": 0, 00:29:51.173 "reconnect_delay_sec": 0, 00:29:51.173 "fast_io_fail_timeout_sec": 0, 00:29:51.173 "generate_uuids": false, 00:29:51.173 "transport_tos": 0, 00:29:51.173 "io_path_stat": false, 00:29:51.173 "allow_accel_sequence": false 00:29:51.173 } 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "method": "bdev_nvme_set_hotplug", 00:29:51.173 "params": { 00:29:51.173 "period_us": 100000, 00:29:51.173 "enable": false 00:29:51.173 } 00:29:51.173 }, 00:29:51.173 { 00:29:51.173 "method": "bdev_malloc_create", 00:29:51.173 "params": { 00:29:51.173 "name": "malloc0", 00:29:51.173 "num_blocks": 8192, 00:29:51.173 "block_size": 4096, 00:29:51.173 "physical_block_size": 4096, 00:29:51.173 "uuid": "a44c994e-7552-4fac-8802-85038d922c5c", 00:29:51.173 "optimal_io_boundary": 0 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "bdev_wait_for_examine" 00:29:51.174 } 00:29:51.174 ] 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "subsystem": "nbd", 00:29:51.174 "config": [] 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "subsystem": "scheduler", 00:29:51.174 "config": [ 00:29:51.174 { 00:29:51.174 "method": "framework_set_scheduler", 00:29:51.174 "params": { 00:29:51.174 "name": "static" 00:29:51.174 } 00:29:51.174 } 00:29:51.174 ] 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "subsystem": "nvmf", 00:29:51.174 "config": [ 00:29:51.174 { 00:29:51.174 "method": "nvmf_set_config", 00:29:51.174 "params": { 00:29:51.174 "discovery_filter": "match_any", 00:29:51.174 
"admin_cmd_passthru": { 00:29:51.174 "identify_ctrlr": false 00:29:51.174 } 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "nvmf_set_max_subsystems", 00:29:51.174 "params": { 00:29:51.174 "max_subsystems": 1024 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "nvmf_set_crdt", 00:29:51.174 "params": { 00:29:51.174 "crdt1": 0, 00:29:51.174 "crdt2": 0, 00:29:51.174 "crdt3": 0 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "nvmf_create_transport", 00:29:51.174 "params": { 00:29:51.174 "trtype": "TCP", 00:29:51.174 "max_queue_depth": 128, 00:29:51.174 "max_io_qpairs_per_ctrlr": 127, 00:29:51.174 "in_capsule_data_size": 4096, 00:29:51.174 "max_io_size": 131072, 00:29:51.174 "io_unit_size": 131072, 00:29:51.174 "max_aq_depth": 128, 00:29:51.174 "num_shared_buffers": 511, 00:29:51.174 "buf_cache_size": 4294967295, 00:29:51.174 "dif_insert_or_strip": false, 00:29:51.174 "zcopy": false, 00:29:51.174 "c2h_success": false, 00:29:51.174 "sock_priority": 0, 00:29:51.174 "abort_timeout_sec": 1 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "nvmf_create_subsystem", 00:29:51.174 "params": { 00:29:51.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.174 "allow_any_host": false, 00:29:51.174 "serial_number": "SPDK00000000000001", 00:29:51.174 "model_number": "SPDK bdev Controller", 00:29:51.174 "max_namespaces": 10, 00:29:51.174 "min_cntlid": 1, 00:29:51.174 "max_cntlid": 65519, 00:29:51.174 "ana_reporting": false 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "nvmf_subsystem_add_host", 00:29:51.174 "params": { 00:29:51.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.174 "host": "nqn.2016-06.io.spdk:host1", 00:29:51.174 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "nvmf_subsystem_add_ns", 00:29:51.174 "params": { 00:29:51.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.174 "namespace": { 00:29:51.174 "nsid": 1, 00:29:51.174 "bdev_name": "malloc0", 00:29:51.174 "nguid": "A44C994E75524FAC880285038D922C5C", 00:29:51.174 "uuid": "a44c994e-7552-4fac-8802-85038d922c5c" 00:29:51.174 } 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "nvmf_subsystem_add_listener", 00:29:51.174 "params": { 00:29:51.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.174 "listen_address": { 00:29:51.174 "trtype": "TCP", 00:29:51.174 "adrfam": "IPv4", 00:29:51.174 "traddr": "10.0.0.2", 00:29:51.174 "trsvcid": "4420" 00:29:51.174 }, 00:29:51.174 "secure_channel": true 00:29:51.174 } 00:29:51.174 } 00:29:51.174 ] 00:29:51.174 } 00:29:51.174 ] 00:29:51.174 }' 00:29:51.174 08:26:24 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:29:51.174 08:26:24 -- target/tls.sh@206 -- # bdevperfconf='{ 00:29:51.174 "subsystems": [ 00:29:51.174 { 00:29:51.174 "subsystem": "iobuf", 00:29:51.174 "config": [ 00:29:51.174 { 00:29:51.174 "method": "iobuf_set_options", 00:29:51.174 "params": { 00:29:51.174 "small_pool_count": 8192, 00:29:51.174 "large_pool_count": 1024, 00:29:51.174 "small_bufsize": 8192, 00:29:51.174 "large_bufsize": 135168 00:29:51.174 } 00:29:51.174 } 00:29:51.174 ] 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "subsystem": "sock", 00:29:51.174 "config": [ 00:29:51.174 { 00:29:51.174 "method": "sock_impl_set_options", 00:29:51.174 "params": { 00:29:51.174 "impl_name": "uring", 00:29:51.174 "recv_buf_size": 2097152, 00:29:51.174 "send_buf_size": 2097152, 
00:29:51.174 "enable_recv_pipe": true, 00:29:51.174 "enable_quickack": false, 00:29:51.174 "enable_placement_id": 0, 00:29:51.174 "enable_zerocopy_send_server": false, 00:29:51.174 "enable_zerocopy_send_client": false, 00:29:51.174 "zerocopy_threshold": 0, 00:29:51.174 "tls_version": 0, 00:29:51.174 "enable_ktls": false 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "sock_impl_set_options", 00:29:51.174 "params": { 00:29:51.174 "impl_name": "posix", 00:29:51.174 "recv_buf_size": 2097152, 00:29:51.174 "send_buf_size": 2097152, 00:29:51.174 "enable_recv_pipe": true, 00:29:51.174 "enable_quickack": false, 00:29:51.174 "enable_placement_id": 0, 00:29:51.174 "enable_zerocopy_send_server": true, 00:29:51.174 "enable_zerocopy_send_client": false, 00:29:51.174 "zerocopy_threshold": 0, 00:29:51.174 "tls_version": 0, 00:29:51.174 "enable_ktls": false 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "sock_impl_set_options", 00:29:51.174 "params": { 00:29:51.174 "impl_name": "ssl", 00:29:51.174 "recv_buf_size": 4096, 00:29:51.174 "send_buf_size": 4096, 00:29:51.174 "enable_recv_pipe": true, 00:29:51.174 "enable_quickack": false, 00:29:51.174 "enable_placement_id": 0, 00:29:51.174 "enable_zerocopy_send_server": true, 00:29:51.174 "enable_zerocopy_send_client": false, 00:29:51.174 "zerocopy_threshold": 0, 00:29:51.174 "tls_version": 0, 00:29:51.174 "enable_ktls": false 00:29:51.174 } 00:29:51.174 } 00:29:51.174 ] 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "subsystem": "vmd", 00:29:51.174 "config": [] 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "subsystem": "accel", 00:29:51.174 "config": [ 00:29:51.174 { 00:29:51.174 "method": "accel_set_options", 00:29:51.174 "params": { 00:29:51.174 "small_cache_size": 128, 00:29:51.174 "large_cache_size": 16, 00:29:51.174 "task_count": 2048, 00:29:51.174 "sequence_count": 2048, 00:29:51.174 "buf_count": 2048 00:29:51.174 } 00:29:51.174 } 00:29:51.174 ] 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "subsystem": "bdev", 00:29:51.174 "config": [ 00:29:51.174 { 00:29:51.174 "method": "bdev_set_options", 00:29:51.174 "params": { 00:29:51.174 "bdev_io_pool_size": 65535, 00:29:51.174 "bdev_io_cache_size": 256, 00:29:51.174 "bdev_auto_examine": true, 00:29:51.174 "iobuf_small_cache_size": 128, 00:29:51.174 "iobuf_large_cache_size": 16 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "bdev_raid_set_options", 00:29:51.174 "params": { 00:29:51.174 "process_window_size_kb": 1024 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "bdev_iscsi_set_options", 00:29:51.174 "params": { 00:29:51.174 "timeout_sec": 30 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "bdev_nvme_set_options", 00:29:51.174 "params": { 00:29:51.174 "action_on_timeout": "none", 00:29:51.174 "timeout_us": 0, 00:29:51.174 "timeout_admin_us": 0, 00:29:51.174 "keep_alive_timeout_ms": 10000, 00:29:51.174 "transport_retry_count": 4, 00:29:51.174 "arbitration_burst": 0, 00:29:51.174 "low_priority_weight": 0, 00:29:51.174 "medium_priority_weight": 0, 00:29:51.174 "high_priority_weight": 0, 00:29:51.174 "nvme_adminq_poll_period_us": 10000, 00:29:51.174 "nvme_ioq_poll_period_us": 0, 00:29:51.174 "io_queue_requests": 512, 00:29:51.174 "delay_cmd_submit": true, 00:29:51.174 "bdev_retry_count": 3, 00:29:51.174 "transport_ack_timeout": 0, 00:29:51.174 "ctrlr_loss_timeout_sec": 0, 00:29:51.174 "reconnect_delay_sec": 0, 00:29:51.174 "fast_io_fail_timeout_sec": 0, 00:29:51.174 "generate_uuids": false, 00:29:51.174 
"transport_tos": 0, 00:29:51.174 "io_path_stat": false, 00:29:51.174 "allow_accel_sequence": false 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "bdev_nvme_attach_controller", 00:29:51.174 "params": { 00:29:51.174 "name": "TLSTEST", 00:29:51.174 "trtype": "TCP", 00:29:51.174 "adrfam": "IPv4", 00:29:51.174 "traddr": "10.0.0.2", 00:29:51.174 "trsvcid": "4420", 00:29:51.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.174 "prchk_reftag": false, 00:29:51.174 "prchk_guard": false, 00:29:51.174 "ctrlr_loss_timeout_sec": 0, 00:29:51.174 "reconnect_delay_sec": 0, 00:29:51.174 "fast_io_fail_timeout_sec": 0, 00:29:51.174 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:29:51.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.174 "hdgst": false, 00:29:51.174 "ddgst": false 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "bdev_nvme_set_hotplug", 00:29:51.174 "params": { 00:29:51.174 "period_us": 100000, 00:29:51.174 "enable": false 00:29:51.174 } 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "method": "bdev_wait_for_examine" 00:29:51.174 } 00:29:51.174 ] 00:29:51.174 }, 00:29:51.174 { 00:29:51.174 "subsystem": "nbd", 00:29:51.174 "config": [] 00:29:51.174 } 00:29:51.174 ] 00:29:51.174 }' 00:29:51.174 08:26:24 -- target/tls.sh@208 -- # killprocess 65539 00:29:51.174 08:26:24 -- common/autotest_common.sh@926 -- # '[' -z 65539 ']' 00:29:51.174 08:26:24 -- common/autotest_common.sh@930 -- # kill -0 65539 00:29:51.174 08:26:24 -- common/autotest_common.sh@931 -- # uname 00:29:51.174 08:26:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:51.174 08:26:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65539 00:29:51.174 killing process with pid 65539 00:29:51.174 Received shutdown signal, test time was about 10.000000 seconds 00:29:51.174 00:29:51.174 Latency(us) 00:29:51.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.174 =================================================================================================================== 00:29:51.174 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:51.174 08:26:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:29:51.174 08:26:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:29:51.174 08:26:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65539' 00:29:51.174 08:26:24 -- common/autotest_common.sh@945 -- # kill 65539 00:29:51.174 08:26:24 -- common/autotest_common.sh@950 -- # wait 65539 00:29:51.433 08:26:24 -- target/tls.sh@209 -- # killprocess 65490 00:29:51.433 08:26:24 -- common/autotest_common.sh@926 -- # '[' -z 65490 ']' 00:29:51.433 08:26:24 -- common/autotest_common.sh@930 -- # kill -0 65490 00:29:51.433 08:26:24 -- common/autotest_common.sh@931 -- # uname 00:29:51.433 08:26:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:51.433 08:26:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65490 00:29:51.433 killing process with pid 65490 00:29:51.433 08:26:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:51.433 08:26:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:51.433 08:26:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65490' 00:29:51.433 08:26:24 -- common/autotest_common.sh@945 -- # kill 65490 00:29:51.433 08:26:24 -- common/autotest_common.sh@950 -- # wait 65490 00:29:51.692 08:26:25 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 
-c /dev/fd/62 00:29:51.692 08:26:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:51.692 08:26:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:51.692 08:26:25 -- target/tls.sh@212 -- # echo '{ 00:29:51.692 "subsystems": [ 00:29:51.692 { 00:29:51.692 "subsystem": "iobuf", 00:29:51.692 "config": [ 00:29:51.692 { 00:29:51.692 "method": "iobuf_set_options", 00:29:51.692 "params": { 00:29:51.692 "small_pool_count": 8192, 00:29:51.692 "large_pool_count": 1024, 00:29:51.692 "small_bufsize": 8192, 00:29:51.692 "large_bufsize": 135168 00:29:51.692 } 00:29:51.692 } 00:29:51.692 ] 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "subsystem": "sock", 00:29:51.692 "config": [ 00:29:51.692 { 00:29:51.692 "method": "sock_impl_set_options", 00:29:51.692 "params": { 00:29:51.692 "impl_name": "uring", 00:29:51.692 "recv_buf_size": 2097152, 00:29:51.692 "send_buf_size": 2097152, 00:29:51.692 "enable_recv_pipe": true, 00:29:51.692 "enable_quickack": false, 00:29:51.692 "enable_placement_id": 0, 00:29:51.692 "enable_zerocopy_send_server": false, 00:29:51.692 "enable_zerocopy_send_client": false, 00:29:51.692 "zerocopy_threshold": 0, 00:29:51.692 "tls_version": 0, 00:29:51.692 "enable_ktls": false 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "sock_impl_set_options", 00:29:51.692 "params": { 00:29:51.692 "impl_name": "posix", 00:29:51.692 "recv_buf_size": 2097152, 00:29:51.692 "send_buf_size": 2097152, 00:29:51.692 "enable_recv_pipe": true, 00:29:51.692 "enable_quickack": false, 00:29:51.692 "enable_placement_id": 0, 00:29:51.692 "enable_zerocopy_send_server": true, 00:29:51.692 "enable_zerocopy_send_client": false, 00:29:51.692 "zerocopy_threshold": 0, 00:29:51.692 "tls_version": 0, 00:29:51.692 "enable_ktls": false 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "sock_impl_set_options", 00:29:51.692 "params": { 00:29:51.692 "impl_name": "ssl", 00:29:51.692 "recv_buf_size": 4096, 00:29:51.692 "send_buf_size": 4096, 00:29:51.692 "enable_recv_pipe": true, 00:29:51.692 "enable_quickack": false, 00:29:51.692 "enable_placement_id": 0, 00:29:51.692 "enable_zerocopy_send_server": true, 00:29:51.692 "enable_zerocopy_send_client": false, 00:29:51.692 "zerocopy_threshold": 0, 00:29:51.692 "tls_version": 0, 00:29:51.692 "enable_ktls": false 00:29:51.692 } 00:29:51.692 } 00:29:51.692 ] 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "subsystem": "vmd", 00:29:51.692 "config": [] 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "subsystem": "accel", 00:29:51.692 "config": [ 00:29:51.692 { 00:29:51.692 "method": "accel_set_options", 00:29:51.692 "params": { 00:29:51.692 "small_cache_size": 128, 00:29:51.692 "large_cache_size": 16, 00:29:51.692 "task_count": 2048, 00:29:51.692 "sequence_count": 2048, 00:29:51.692 "buf_count": 2048 00:29:51.692 } 00:29:51.692 } 00:29:51.692 ] 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "subsystem": "bdev", 00:29:51.692 "config": [ 00:29:51.692 { 00:29:51.692 "method": "bdev_set_options", 00:29:51.692 "params": { 00:29:51.692 "bdev_io_pool_size": 65535, 00:29:51.692 "bdev_io_cache_size": 256, 00:29:51.692 "bdev_auto_examine": true, 00:29:51.692 "iobuf_small_cache_size": 128, 00:29:51.692 "iobuf_large_cache_size": 16 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "bdev_raid_set_options", 00:29:51.692 "params": { 00:29:51.692 "process_window_size_kb": 1024 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "bdev_iscsi_set_options", 00:29:51.692 "params": { 00:29:51.692 "timeout_sec": 30 00:29:51.692 
} 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "bdev_nvme_set_options", 00:29:51.692 "params": { 00:29:51.692 "action_on_timeout": "none", 00:29:51.692 "timeout_us": 0, 00:29:51.692 "timeout_admin_us": 0, 00:29:51.692 "keep_alive_timeout_ms": 10000, 00:29:51.692 "transport_retry_count": 4, 00:29:51.692 "arbitration_burst": 0, 00:29:51.692 "low_priority_weight": 0, 00:29:51.692 "medium_priority_weight": 0, 00:29:51.692 "high_priority_weight": 0, 00:29:51.692 "nvme_adminq_poll_period_us": 10000, 00:29:51.692 "nvme_ioq_poll_period_us": 0, 00:29:51.692 "io_queue_requests": 0, 00:29:51.692 "delay_cmd_submit": true, 00:29:51.692 "bdev_retry_count": 3, 00:29:51.692 "transport_ack_timeout": 0, 00:29:51.692 "ctrlr_loss_timeout_sec": 0, 00:29:51.692 "reconnect_delay_sec": 0, 00:29:51.692 "fast_io_fail_timeout_sec": 0, 00:29:51.692 "generate_uuids": false, 00:29:51.692 "transport_tos": 0, 00:29:51.692 "io_path_stat": false, 00:29:51.692 "allow_accel_sequence": false 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "bdev_nvme_set_hotplug", 00:29:51.692 "params": { 00:29:51.692 "period_us": 100000, 00:29:51.692 "enable": false 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "bdev_malloc_create", 00:29:51.692 "params": { 00:29:51.692 "name": "malloc0", 00:29:51.692 "num_blocks": 8192, 00:29:51.692 "block_size": 4096, 00:29:51.692 "physical_block_size": 4096, 00:29:51.692 "uuid": "a44c994e-7552-4fac-8802-85038d922c5c", 00:29:51.692 "optimal_io_boundary": 0 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "bdev_wait_for_examine" 00:29:51.692 } 00:29:51.692 ] 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "subsystem": "nbd", 00:29:51.692 "config": [] 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "subsystem": "scheduler", 00:29:51.692 "config": [ 00:29:51.692 { 00:29:51.692 "method": "framework_set_scheduler", 00:29:51.692 "params": { 00:29:51.692 "name": "static" 00:29:51.692 } 00:29:51.692 } 00:29:51.692 ] 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "subsystem": "nvmf", 00:29:51.692 "config": [ 00:29:51.692 { 00:29:51.692 "method": "nvmf_set_config", 00:29:51.692 "params": { 00:29:51.692 "discovery_filter": "match_any", 00:29:51.692 "admin_cmd_passthru": { 00:29:51.692 "identify_ctrlr": false 00:29:51.692 } 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "nvmf_set_max_subsystems", 00:29:51.692 "params": { 00:29:51.692 "max_subsystems": 1024 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "nvmf_set_crdt", 00:29:51.692 "params": { 00:29:51.692 "crdt1": 0, 00:29:51.692 "crdt2": 0, 00:29:51.692 "crdt3": 0 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "nvmf_create_transport", 00:29:51.692 "params": { 00:29:51.692 "trtype": "TCP", 00:29:51.692 "max_queue_depth": 128, 00:29:51.692 "max_io_qpairs_per_ctrlr": 127, 00:29:51.692 "in_capsule_data_size": 4096, 00:29:51.692 "max_io_size": 131072, 00:29:51.692 "io_unit_size": 131072, 00:29:51.692 "max_aq_depth": 128, 00:29:51.692 "num_shared_buffers": 511, 00:29:51.692 "buf_cache_size": 4294967295, 00:29:51.692 "dif_insert_or_strip": false, 00:29:51.692 "zcopy": false, 00:29:51.692 "c2h_success": false, 00:29:51.692 "sock_priority": 0, 00:29:51.692 "abort_timeout_sec": 1 00:29:51.692 } 00:29:51.692 }, 00:29:51.692 { 00:29:51.692 "method": "nvmf_create_subsystem", 00:29:51.692 "params": { 00:29:51.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.692 "allow_any_host": false, 00:29:51.692 "serial_number": "SPDK00000000000001", 00:29:51.692 
"model_number": "SPDK bdev Controller", 00:29:51.692 "max_namespaces": 10, 00:29:51.692 "min_cntlid": 1, 00:29:51.692 "max_cntlid": 65519, 00:29:51.693 "ana_reporting": false 00:29:51.693 } 00:29:51.693 }, 00:29:51.693 { 00:29:51.693 "method": "nvmf_subsystem_add_host", 00:29:51.693 "params": { 00:29:51.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.693 "host": "nqn.2016-06.io.spdk:host1", 00:29:51.693 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:29:51.693 } 00:29:51.693 }, 00:29:51.693 { 00:29:51.693 "method": "nvmf_subsystem_add_ns", 00:29:51.693 "params": { 00:29:51.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.693 "namespace": { 00:29:51.693 "nsid": 1, 00:29:51.693 "bdev_name": "malloc0", 00:29:51.693 "nguid": "A44C994E75524FAC880285038D922C5C", 00:29:51.693 "uuid": "a44c994e-7552-4fac-8802-85038d922c5c" 00:29:51.693 } 00:29:51.693 } 00:29:51.693 }, 00:29:51.693 { 00:29:51.693 "method": "nvmf_subsystem_add_listener", 00:29:51.693 "params": { 00:29:51.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.693 "listen_address": { 00:29:51.693 "trtype": "TCP", 00:29:51.693 "adrfam": "IPv4", 00:29:51.693 "traddr": "10.0.0.2", 00:29:51.693 "trsvcid": "4420" 00:29:51.693 }, 00:29:51.693 "secure_channel": true 00:29:51.693 } 00:29:51.693 } 00:29:51.693 ] 00:29:51.693 } 00:29:51.693 ] 00:29:51.693 }' 00:29:51.693 08:26:25 -- common/autotest_common.sh@10 -- # set +x 00:29:51.693 08:26:25 -- nvmf/common.sh@469 -- # nvmfpid=65588 00:29:51.693 08:26:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:29:51.693 08:26:25 -- nvmf/common.sh@470 -- # waitforlisten 65588 00:29:51.693 08:26:25 -- common/autotest_common.sh@819 -- # '[' -z 65588 ']' 00:29:51.693 08:26:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.693 08:26:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:51.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.693 08:26:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.693 08:26:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:51.693 08:26:25 -- common/autotest_common.sh@10 -- # set +x 00:29:51.952 [2024-04-17 08:26:25.065659] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:51.952 [2024-04-17 08:26:25.065749] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.952 [2024-04-17 08:26:25.206519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.212 [2024-04-17 08:26:25.306657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:52.212 [2024-04-17 08:26:25.306797] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.212 [2024-04-17 08:26:25.306805] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.212 [2024-04-17 08:26:25.306812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:52.212 [2024-04-17 08:26:25.306840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.212 [2024-04-17 08:26:25.515181] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.470 [2024-04-17 08:26:25.547091] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:52.470 [2024-04-17 08:26:25.547264] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.730 08:26:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:52.731 08:26:25 -- common/autotest_common.sh@852 -- # return 0 00:29:52.731 08:26:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:52.731 08:26:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:52.731 08:26:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.731 08:26:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.731 08:26:25 -- target/tls.sh@216 -- # bdevperf_pid=65620 00:29:52.731 08:26:25 -- target/tls.sh@217 -- # waitforlisten 65620 /var/tmp/bdevperf.sock 00:29:52.731 08:26:25 -- common/autotest_common.sh@819 -- # '[' -z 65620 ']' 00:29:52.731 08:26:25 -- target/tls.sh@213 -- # echo '{ 00:29:52.731 "subsystems": [ 00:29:52.731 { 00:29:52.731 "subsystem": "iobuf", 00:29:52.731 "config": [ 00:29:52.731 { 00:29:52.731 "method": "iobuf_set_options", 00:29:52.731 "params": { 00:29:52.731 "small_pool_count": 8192, 00:29:52.731 "large_pool_count": 1024, 00:29:52.731 "small_bufsize": 8192, 00:29:52.731 "large_bufsize": 135168 00:29:52.731 } 00:29:52.731 } 00:29:52.731 ] 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "subsystem": "sock", 00:29:52.731 "config": [ 00:29:52.731 { 00:29:52.731 "method": "sock_impl_set_options", 00:29:52.731 "params": { 00:29:52.731 "impl_name": "uring", 00:29:52.731 "recv_buf_size": 2097152, 00:29:52.731 "send_buf_size": 2097152, 00:29:52.731 "enable_recv_pipe": true, 00:29:52.731 "enable_quickack": false, 00:29:52.731 "enable_placement_id": 0, 00:29:52.731 "enable_zerocopy_send_server": false, 00:29:52.731 "enable_zerocopy_send_client": false, 00:29:52.731 "zerocopy_threshold": 0, 00:29:52.731 "tls_version": 0, 00:29:52.731 "enable_ktls": false 00:29:52.731 } 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "method": "sock_impl_set_options", 00:29:52.731 "params": { 00:29:52.731 "impl_name": "posix", 00:29:52.731 "recv_buf_size": 2097152, 00:29:52.731 "send_buf_size": 2097152, 00:29:52.731 "enable_recv_pipe": true, 00:29:52.731 "enable_quickack": false, 00:29:52.731 "enable_placement_id": 0, 00:29:52.731 "enable_zerocopy_send_server": true, 00:29:52.731 "enable_zerocopy_send_client": false, 00:29:52.731 "zerocopy_threshold": 0, 00:29:52.731 "tls_version": 0, 00:29:52.731 "enable_ktls": false 00:29:52.731 } 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "method": "sock_impl_set_options", 00:29:52.731 "params": { 00:29:52.731 "impl_name": "ssl", 00:29:52.731 "recv_buf_size": 4096, 00:29:52.731 "send_buf_size": 4096, 00:29:52.731 "enable_recv_pipe": true, 00:29:52.731 "enable_quickack": false, 00:29:52.731 "enable_placement_id": 0, 00:29:52.731 "enable_zerocopy_send_server": true, 00:29:52.731 "enable_zerocopy_send_client": false, 00:29:52.731 "zerocopy_threshold": 0, 00:29:52.731 "tls_version": 0, 00:29:52.731 "enable_ktls": false 00:29:52.731 } 00:29:52.731 } 00:29:52.731 ] 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "subsystem": "vmd", 00:29:52.731 "config": [] 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "subsystem": "accel", 
00:29:52.731 "config": [ 00:29:52.731 { 00:29:52.731 "method": "accel_set_options", 00:29:52.731 "params": { 00:29:52.731 "small_cache_size": 128, 00:29:52.731 "large_cache_size": 16, 00:29:52.731 "task_count": 2048, 00:29:52.731 "sequence_count": 2048, 00:29:52.731 "buf_count": 2048 00:29:52.731 } 00:29:52.731 } 00:29:52.731 ] 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "subsystem": "bdev", 00:29:52.731 "config": [ 00:29:52.731 { 00:29:52.731 "method": "bdev_set_options", 00:29:52.731 "params": { 00:29:52.731 "bdev_io_pool_size": 65535, 00:29:52.731 "bdev_io_cache_size": 256, 00:29:52.731 "bdev_auto_examine": true, 00:29:52.731 "iobuf_small_cache_size": 128, 00:29:52.731 "iobuf_large_cache_size": 16 00:29:52.731 } 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "method": "bdev_raid_set_options", 00:29:52.731 "params": { 00:29:52.731 "process_window_size_kb": 1024 00:29:52.731 } 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "method": "bdev_iscsi_set_options", 00:29:52.731 "params": { 00:29:52.731 "timeout_sec": 30 00:29:52.731 } 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "method": "bdev_nvme_set_options", 00:29:52.731 "params": { 00:29:52.731 "action_on_timeout": "none", 00:29:52.731 "timeout_us": 0, 00:29:52.731 "timeout_admin_us": 0, 00:29:52.731 "keep_alive_timeout_ms": 10000, 00:29:52.731 "transport_retry_count": 4, 00:29:52.731 "arbitration_burst": 0, 00:29:52.731 "low_priority_weight": 0, 00:29:52.731 "medium_priority_weight": 0, 00:29:52.731 "high_priority_weight": 0, 00:29:52.731 "nvme_adminq_poll_period_us": 10000, 00:29:52.731 "nvme_ioq_poll_period_us": 0, 00:29:52.731 "io_queue_requests": 512, 00:29:52.731 "delay_cmd_submit": true, 00:29:52.731 "bdev_retry_count": 3, 00:29:52.731 "transport_ack_timeout": 0, 00:29:52.731 "ctrlr_loss_timeout_sec": 0, 00:29:52.731 "reconnect_delay_sec": 0, 00:29:52.731 "fast_io_fail_timeout_sec": 0, 00:29:52.731 "generate_uuids": false, 00:29:52.731 "transport_tos": 0, 00:29:52.731 "io_path_stat": false, 00:29:52.731 "allow_accel_sequence": false 00:29:52.731 } 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "method": "bdev_nvme_attach_controller", 00:29:52.731 "params": { 00:29:52.731 "name": "TLSTEST", 00:29:52.731 "trtype": "TCP", 00:29:52.731 "adrfam": "IPv4", 00:29:52.731 "traddr": "10.0.0.2", 00:29:52.731 "trsvcid": "4420", 00:29:52.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.731 "prchk_reftag": false, 00:29:52.731 "prchk_guard": false, 00:29:52.731 "ctrlr_loss_timeout_sec": 0, 00:29:52.731 "reconnect_delay_sec": 0, 00:29:52.731 "fast_io_fail_timeout_sec": 0, 00:29:52.731 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:29:52.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:52.731 "hdgst": false, 00:29:52.731 "ddgst": false 00:29:52.731 } 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "method": "bdev_nvme_set_hotplug", 00:29:52.731 "params": { 00:29:52.731 "period_us": 100000, 00:29:52.731 "enable": false 00:29:52.731 } 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "method": "bdev_wait_for_examine" 00:29:52.731 } 00:29:52.731 ] 00:29:52.731 }, 00:29:52.731 { 00:29:52.731 "subsystem": "nbd", 00:29:52.731 "config": [] 00:29:52.731 } 00:29:52.731 ] 00:29:52.731 }' 00:29:52.731 08:26:25 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:29:52.731 08:26:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:52.731 08:26:25 -- common/autotest_common.sh@824 -- # local max_retries=100 
00:29:52.731 08:26:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:52.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:52.731 08:26:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:52.731 08:26:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.731 [2024-04-17 08:26:26.042495] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:52.731 [2024-04-17 08:26:26.042583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65620 ] 00:29:52.991 [2024-04-17 08:26:26.166490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.991 [2024-04-17 08:26:26.270548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.250 [2024-04-17 08:26:26.419499] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:53.817 08:26:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:53.817 08:26:26 -- common/autotest_common.sh@852 -- # return 0 00:29:53.817 08:26:26 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:29:53.817 Running I/O for 10 seconds... 00:30:03.801 00:30:03.801 Latency(us) 00:30:03.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.801 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:03.801 Verification LBA range: start 0x0 length 0x2000 00:30:03.801 TLSTESTn1 : 10.01 7229.12 28.24 0.00 0.00 17679.65 3834.86 18086.79 00:30:03.801 =================================================================================================================== 00:30:03.801 Total : 7229.12 28.24 0.00 0.00 17679.65 3834.86 18086.79 00:30:03.801 0 00:30:03.801 08:26:37 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:03.801 08:26:37 -- target/tls.sh@223 -- # killprocess 65620 00:30:03.801 08:26:37 -- common/autotest_common.sh@926 -- # '[' -z 65620 ']' 00:30:03.801 08:26:37 -- common/autotest_common.sh@930 -- # kill -0 65620 00:30:03.801 08:26:37 -- common/autotest_common.sh@931 -- # uname 00:30:03.801 08:26:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:03.801 08:26:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65620 00:30:03.801 killing process with pid 65620 00:30:03.801 Received shutdown signal, test time was about 10.000000 seconds 00:30:03.801 00:30:03.801 Latency(us) 00:30:03.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.801 =================================================================================================================== 00:30:03.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:03.801 08:26:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:30:03.801 08:26:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:30:03.801 08:26:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65620' 00:30:03.801 08:26:37 -- common/autotest_common.sh@945 -- # kill 65620 00:30:03.801 08:26:37 -- common/autotest_common.sh@950 -- # wait 65620 00:30:04.060 08:26:37 -- target/tls.sh@224 -- # killprocess 65588 00:30:04.060 08:26:37 
-- common/autotest_common.sh@926 -- # '[' -z 65588 ']' 00:30:04.060 08:26:37 -- common/autotest_common.sh@930 -- # kill -0 65588 00:30:04.060 08:26:37 -- common/autotest_common.sh@931 -- # uname 00:30:04.060 08:26:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:04.060 08:26:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65588 00:30:04.060 killing process with pid 65588 00:30:04.060 08:26:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:04.060 08:26:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:04.060 08:26:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65588' 00:30:04.060 08:26:37 -- common/autotest_common.sh@945 -- # kill 65588 00:30:04.060 08:26:37 -- common/autotest_common.sh@950 -- # wait 65588 00:30:04.318 08:26:37 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:30:04.318 08:26:37 -- target/tls.sh@227 -- # cleanup 00:30:04.318 08:26:37 -- target/tls.sh@15 -- # process_shm --id 0 00:30:04.318 08:26:37 -- common/autotest_common.sh@796 -- # type=--id 00:30:04.318 08:26:37 -- common/autotest_common.sh@797 -- # id=0 00:30:04.318 08:26:37 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:30:04.318 08:26:37 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:04.318 08:26:37 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:30:04.318 08:26:37 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:30:04.318 08:26:37 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:30:04.318 08:26:37 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:04.318 nvmf_trace.0 00:30:04.577 08:26:37 -- common/autotest_common.sh@811 -- # return 0 00:30:04.577 08:26:37 -- target/tls.sh@16 -- # killprocess 65620 00:30:04.577 08:26:37 -- common/autotest_common.sh@926 -- # '[' -z 65620 ']' 00:30:04.577 08:26:37 -- common/autotest_common.sh@930 -- # kill -0 65620 00:30:04.577 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (65620) - No such process 00:30:04.577 Process with pid 65620 is not found 00:30:04.577 08:26:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 65620 is not found' 00:30:04.577 08:26:37 -- target/tls.sh@17 -- # nvmftestfini 00:30:04.577 08:26:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:04.577 08:26:37 -- nvmf/common.sh@116 -- # sync 00:30:04.577 08:26:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:04.577 08:26:37 -- nvmf/common.sh@119 -- # set +e 00:30:04.577 08:26:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:04.577 08:26:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:04.577 rmmod nvme_tcp 00:30:04.577 rmmod nvme_fabrics 00:30:04.577 rmmod nvme_keyring 00:30:04.577 08:26:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:04.577 08:26:37 -- nvmf/common.sh@123 -- # set -e 00:30:04.577 08:26:37 -- nvmf/common.sh@124 -- # return 0 00:30:04.577 08:26:37 -- nvmf/common.sh@477 -- # '[' -n 65588 ']' 00:30:04.577 08:26:37 -- nvmf/common.sh@478 -- # killprocess 65588 00:30:04.577 08:26:37 -- common/autotest_common.sh@926 -- # '[' -z 65588 ']' 00:30:04.577 08:26:37 -- common/autotest_common.sh@930 -- # kill -0 65588 00:30:04.577 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (65588) - No such process 00:30:04.577 Process with pid 65588 is not found 00:30:04.577 08:26:37 -- common/autotest_common.sh@953 -- # echo 
'Process with pid 65588 is not found' 00:30:04.577 08:26:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:04.577 08:26:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:04.577 08:26:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:04.577 08:26:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.577 08:26:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:04.577 08:26:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.577 08:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.577 08:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.577 08:26:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:30:04.577 08:26:37 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:30:04.577 00:30:04.577 real 1m8.886s 00:30:04.577 user 1m46.052s 00:30:04.577 sys 0m22.838s 00:30:04.578 08:26:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.578 08:26:37 -- common/autotest_common.sh@10 -- # set +x 00:30:04.578 ************************************ 00:30:04.578 END TEST nvmf_tls 00:30:04.578 ************************************ 00:30:04.578 08:26:37 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:30:04.578 08:26:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:04.578 08:26:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:04.578 08:26:37 -- common/autotest_common.sh@10 -- # set +x 00:30:04.578 ************************************ 00:30:04.578 START TEST nvmf_fips 00:30:04.578 ************************************ 00:30:04.578 08:26:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:30:04.837 * Looking for test storage... 
00:30:04.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:30:04.837 08:26:37 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:04.837 08:26:37 -- nvmf/common.sh@7 -- # uname -s 00:30:04.837 08:26:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.837 08:26:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.837 08:26:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.837 08:26:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.837 08:26:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.837 08:26:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.837 08:26:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.837 08:26:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.837 08:26:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.837 08:26:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.837 08:26:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:30:04.837 08:26:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:30:04.837 08:26:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.837 08:26:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.837 08:26:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:04.837 08:26:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:04.837 08:26:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.837 08:26:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.837 08:26:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.837 08:26:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.837 08:26:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.837 08:26:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.837 08:26:37 -- paths/export.sh@5 -- 
# export PATH 00:30:04.837 08:26:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.837 08:26:37 -- nvmf/common.sh@46 -- # : 0 00:30:04.837 08:26:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:04.837 08:26:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:04.837 08:26:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:04.837 08:26:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.837 08:26:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.837 08:26:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:04.837 08:26:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:04.837 08:26:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:04.837 08:26:37 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:04.837 08:26:37 -- fips/fips.sh@89 -- # check_openssl_version 00:30:04.837 08:26:37 -- fips/fips.sh@83 -- # local target=3.0.0 00:30:04.837 08:26:37 -- fips/fips.sh@85 -- # openssl version 00:30:04.837 08:26:37 -- fips/fips.sh@85 -- # awk '{print $2}' 00:30:04.837 08:26:37 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:30:04.837 08:26:37 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:30:04.837 08:26:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:30:04.837 08:26:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:30:04.837 08:26:37 -- scripts/common.sh@335 -- # IFS=.-: 00:30:04.837 08:26:37 -- scripts/common.sh@335 -- # read -ra ver1 00:30:04.837 08:26:37 -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.837 08:26:37 -- scripts/common.sh@336 -- # read -ra ver2 00:30:04.837 08:26:37 -- scripts/common.sh@337 -- # local 'op=>=' 00:30:04.837 08:26:37 -- scripts/common.sh@339 -- # ver1_l=3 00:30:04.837 08:26:37 -- scripts/common.sh@340 -- # ver2_l=3 00:30:04.837 08:26:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:30:04.837 08:26:37 -- scripts/common.sh@343 -- # case "$op" in 00:30:04.837 08:26:37 -- scripts/common.sh@347 -- # : 1 00:30:04.837 08:26:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:30:04.837 08:26:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.837 08:26:37 -- scripts/common.sh@364 -- # decimal 3 00:30:04.837 08:26:37 -- scripts/common.sh@352 -- # local d=3 00:30:04.837 08:26:37 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:30:04.837 08:26:37 -- scripts/common.sh@354 -- # echo 3 00:30:04.837 08:26:37 -- scripts/common.sh@364 -- # ver1[v]=3 00:30:04.837 08:26:37 -- scripts/common.sh@365 -- # decimal 3 00:30:04.837 08:26:37 -- scripts/common.sh@352 -- # local d=3 00:30:04.837 08:26:37 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:30:04.837 08:26:37 -- scripts/common.sh@354 -- # echo 3 00:30:04.837 08:26:37 -- scripts/common.sh@365 -- # ver2[v]=3 00:30:04.837 08:26:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:04.837 08:26:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:04.837 08:26:37 -- scripts/common.sh@363 -- # (( v++ )) 00:30:04.837 08:26:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:04.837 08:26:37 -- scripts/common.sh@364 -- # decimal 0 00:30:04.837 08:26:37 -- scripts/common.sh@352 -- # local d=0 00:30:04.837 08:26:37 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:04.837 08:26:37 -- scripts/common.sh@354 -- # echo 0 00:30:04.837 08:26:37 -- scripts/common.sh@364 -- # ver1[v]=0 00:30:04.837 08:26:37 -- scripts/common.sh@365 -- # decimal 0 00:30:04.837 08:26:37 -- scripts/common.sh@352 -- # local d=0 00:30:04.837 08:26:37 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:04.837 08:26:37 -- scripts/common.sh@354 -- # echo 0 00:30:04.837 08:26:37 -- scripts/common.sh@365 -- # ver2[v]=0 00:30:04.837 08:26:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:04.837 08:26:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:30:04.837 08:26:37 -- scripts/common.sh@363 -- # (( v++ )) 00:30:04.837 08:26:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:04.837 08:26:37 -- scripts/common.sh@364 -- # decimal 9 00:30:04.837 08:26:37 -- scripts/common.sh@352 -- # local d=9 00:30:04.837 08:26:37 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:30:04.837 08:26:37 -- scripts/common.sh@354 -- # echo 9 00:30:04.837 08:26:38 -- scripts/common.sh@364 -- # ver1[v]=9 00:30:04.837 08:26:38 -- scripts/common.sh@365 -- # decimal 0 00:30:04.837 08:26:38 -- scripts/common.sh@352 -- # local d=0 00:30:04.837 08:26:38 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:04.837 08:26:38 -- scripts/common.sh@354 -- # echo 0 00:30:04.837 08:26:38 -- scripts/common.sh@365 -- # ver2[v]=0 00:30:04.837 08:26:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:30:04.837 08:26:38 -- scripts/common.sh@366 -- # return 0 00:30:04.837 08:26:38 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:30:04.837 08:26:38 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:30:04.837 08:26:38 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:30:04.837 08:26:38 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:30:04.837 08:26:38 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:30:04.837 08:26:38 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:30:04.837 08:26:38 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:30:04.837 08:26:38 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:30:04.837 08:26:38 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:30:04.837 08:26:38 -- fips/fips.sh@114 -- # build_openssl_config 00:30:04.837 08:26:38 -- fips/fips.sh@37 -- # cat 00:30:04.837 08:26:38 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:30:04.837 08:26:38 -- fips/fips.sh@58 -- # cat - 00:30:04.837 08:26:38 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:30:04.837 08:26:38 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:30:04.837 08:26:38 -- fips/fips.sh@117 -- # mapfile -t providers 00:30:04.837 08:26:38 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:30:04.837 08:26:38 -- fips/fips.sh@117 -- # openssl list -providers 00:30:04.837 08:26:38 -- fips/fips.sh@117 -- # grep name 00:30:04.837 08:26:38 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:30:04.837 08:26:38 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:30:04.837 08:26:38 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:30:04.838 08:26:38 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:30:04.838 08:26:38 -- common/autotest_common.sh@640 -- # local es=0 00:30:04.838 08:26:38 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:30:04.838 08:26:38 -- common/autotest_common.sh@628 -- # local arg=openssl 00:30:04.838 08:26:38 -- fips/fips.sh@128 -- # : 00:30:04.838 08:26:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:04.838 08:26:38 -- common/autotest_common.sh@632 -- # type -t openssl 00:30:04.838 08:26:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:04.838 08:26:38 -- common/autotest_common.sh@634 -- # type -P openssl 00:30:04.838 08:26:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:04.838 08:26:38 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:30:04.838 08:26:38 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:30:04.838 08:26:38 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:30:04.838 Error setting digest 00:30:04.838 00621D87EF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:30:04.838 00621D87EF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:30:04.838 08:26:38 -- common/autotest_common.sh@643 -- # es=1 00:30:04.838 08:26:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:04.838 08:26:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:04.838 08:26:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
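The FIPS gate exercised above reduces to three checks: the OpenSSL version must be at least 3.0.0, the provider list (read with OPENSSL_CONF pointing at the generated spdk_fips.conf) must contain both the base and the FIPS provider, and a non-approved digest such as MD5 must be rejected, which is why the "Error setting digest" lines and es=1 are the expected, passing outcome. The same checks can be repeated by hand; a sketch, assuming an OpenSSL 3.x install with the FIPS module available and a FIPS-only configuration already written to spdk_fips.conf:

    export OPENSSL_CONF=spdk_fips.conf
    openssl version                       # expect 3.0.0 or newer
    openssl list -providers | grep name   # expect both a "base" and a "fips" provider
    echo test | openssl md5               # must fail while only FIPS-approved algorithms are allowed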
00:30:04.838 08:26:38 -- fips/fips.sh@131 -- # nvmftestinit 00:30:04.838 08:26:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:04.838 08:26:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.838 08:26:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:04.838 08:26:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:04.838 08:26:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:04.838 08:26:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.838 08:26:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.838 08:26:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.838 08:26:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:30:04.838 08:26:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:30:04.838 08:26:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:30:04.838 08:26:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:30:04.838 08:26:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:30:04.838 08:26:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:30:04.838 08:26:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.838 08:26:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.838 08:26:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:04.838 08:26:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:30:04.838 08:26:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:04.838 08:26:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:04.838 08:26:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:04.838 08:26:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.838 08:26:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:04.838 08:26:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:04.838 08:26:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:04.838 08:26:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:04.838 08:26:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:30:04.838 08:26:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:30:05.122 Cannot find device "nvmf_tgt_br" 00:30:05.122 08:26:38 -- nvmf/common.sh@154 -- # true 00:30:05.122 08:26:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:30:05.122 Cannot find device "nvmf_tgt_br2" 00:30:05.122 08:26:38 -- nvmf/common.sh@155 -- # true 00:30:05.122 08:26:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:30:05.122 08:26:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:30:05.122 Cannot find device "nvmf_tgt_br" 00:30:05.122 08:26:38 -- nvmf/common.sh@157 -- # true 00:30:05.122 08:26:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:30:05.122 Cannot find device "nvmf_tgt_br2" 00:30:05.122 08:26:38 -- nvmf/common.sh@158 -- # true 00:30:05.122 08:26:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:30:05.122 08:26:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:30:05.122 08:26:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:05.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:05.122 08:26:38 -- nvmf/common.sh@161 -- # true 00:30:05.122 08:26:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:05.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:30:05.122 08:26:38 -- nvmf/common.sh@162 -- # true 00:30:05.122 08:26:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:30:05.122 08:26:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:05.122 08:26:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:05.122 08:26:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:05.122 08:26:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:05.122 08:26:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:05.122 08:26:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:05.122 08:26:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:05.122 08:26:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:05.122 08:26:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:30:05.122 08:26:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:30:05.122 08:26:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:30:05.122 08:26:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:30:05.122 08:26:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:05.122 08:26:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:05.122 08:26:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:05.122 08:26:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:30:05.122 08:26:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:30:05.122 08:26:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:30:05.122 08:26:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:05.122 08:26:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:05.380 08:26:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:05.380 08:26:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:05.380 08:26:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:30:05.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:30:05.380 00:30:05.380 --- 10.0.0.2 ping statistics --- 00:30:05.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.380 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:30:05.380 08:26:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:30:05.380 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:05.380 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.150 ms 00:30:05.380 00:30:05.380 --- 10.0.0.3 ping statistics --- 00:30:05.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.380 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:30:05.380 08:26:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:05.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:30:05.380 00:30:05.380 --- 10.0.0.1 ping statistics --- 00:30:05.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.380 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:30:05.380 08:26:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.380 08:26:38 -- nvmf/common.sh@421 -- # return 0 00:30:05.380 08:26:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:05.380 08:26:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.380 08:26:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:05.380 08:26:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:05.380 08:26:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.380 08:26:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:05.380 08:26:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:05.380 08:26:38 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:30:05.380 08:26:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:05.380 08:26:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:05.380 08:26:38 -- common/autotest_common.sh@10 -- # set +x 00:30:05.380 08:26:38 -- nvmf/common.sh@469 -- # nvmfpid=65967 00:30:05.380 08:26:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:05.380 08:26:38 -- nvmf/common.sh@470 -- # waitforlisten 65967 00:30:05.380 08:26:38 -- common/autotest_common.sh@819 -- # '[' -z 65967 ']' 00:30:05.380 08:26:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.380 08:26:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:05.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.380 08:26:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.380 08:26:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:05.380 08:26:38 -- common/autotest_common.sh@10 -- # set +x 00:30:05.380 [2024-04-17 08:26:38.591555] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:05.380 [2024-04-17 08:26:38.591615] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.639 [2024-04-17 08:26:38.733225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.639 [2024-04-17 08:26:38.834862] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:05.639 [2024-04-17 08:26:38.835009] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.639 [2024-04-17 08:26:38.835019] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.639 [2024-04-17 08:26:38.835026] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
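None of these tests touch a physical NIC: nvmf_veth_init builds a virtual topology in which the target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk network namespace, the initiator interface (10.0.0.1) stays in the root namespace, and the veth peers are joined through the nvmf_br bridge, which is exactly what the three pings above verify. Stripped of teardown and error handling, and leaving out the second target interface, the setup amounts to the following (commands taken from the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace -> target namespace, as in the output above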
00:30:05.639 [2024-04-17 08:26:38.835051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.204 08:26:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:06.204 08:26:39 -- common/autotest_common.sh@852 -- # return 0 00:30:06.204 08:26:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:06.204 08:26:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:06.204 08:26:39 -- common/autotest_common.sh@10 -- # set +x 00:30:06.204 08:26:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.204 08:26:39 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:30:06.204 08:26:39 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:30:06.204 08:26:39 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:06.204 08:26:39 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:30:06.204 08:26:39 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:06.204 08:26:39 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:06.204 08:26:39 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:06.204 08:26:39 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:06.461 [2024-04-17 08:26:39.732466] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.461 [2024-04-17 08:26:39.748375] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:06.462 [2024-04-17 08:26:39.748555] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.462 malloc0 00:30:06.720 08:26:39 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:06.720 08:26:39 -- fips/fips.sh@148 -- # bdevperf_pid=66005 00:30:06.720 08:26:39 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:06.720 08:26:39 -- fips/fips.sh@149 -- # waitforlisten 66005 /var/tmp/bdevperf.sock 00:30:06.720 08:26:39 -- common/autotest_common.sh@819 -- # '[' -z 66005 ']' 00:30:06.720 08:26:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:06.720 08:26:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:06.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:06.720 08:26:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:06.720 08:26:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:06.720 08:26:39 -- common/autotest_common.sh@10 -- # set +x 00:30:06.720 [2024-04-17 08:26:39.916120] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:30:06.720 [2024-04-17 08:26:39.916247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66005 ] 00:30:06.978 [2024-04-17 08:26:40.063982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.978 [2024-04-17 08:26:40.162474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:07.546 08:26:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:07.546 08:26:40 -- common/autotest_common.sh@852 -- # return 0 00:30:07.546 08:26:40 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:07.804 [2024-04-17 08:26:41.031470] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:07.804 TLSTESTn1 00:30:07.804 08:26:41 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:08.063 Running I/O for 10 seconds... 00:30:18.081 00:30:18.081 Latency(us) 00:30:18.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.081 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:18.081 Verification LBA range: start 0x0 length 0x2000 00:30:18.081 TLSTESTn1 : 10.01 7247.16 28.31 0.00 0.00 17636.47 4063.80 19117.05 00:30:18.081 =================================================================================================================== 00:30:18.081 Total : 7247.16 28.31 0.00 0.00 17636.47 4063.80 19117.05 00:30:18.081 0 00:30:18.081 08:26:51 -- fips/fips.sh@1 -- # cleanup 00:30:18.081 08:26:51 -- fips/fips.sh@15 -- # process_shm --id 0 00:30:18.081 08:26:51 -- common/autotest_common.sh@796 -- # type=--id 00:30:18.081 08:26:51 -- common/autotest_common.sh@797 -- # id=0 00:30:18.081 08:26:51 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:30:18.081 08:26:51 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:18.081 08:26:51 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:30:18.081 08:26:51 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:30:18.081 08:26:51 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:30:18.081 08:26:51 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:18.081 nvmf_trace.0 00:30:18.081 08:26:51 -- common/autotest_common.sh@811 -- # return 0 00:30:18.081 08:26:51 -- fips/fips.sh@16 -- # killprocess 66005 00:30:18.081 08:26:51 -- common/autotest_common.sh@926 -- # '[' -z 66005 ']' 00:30:18.081 08:26:51 -- common/autotest_common.sh@930 -- # kill -0 66005 00:30:18.081 08:26:51 -- common/autotest_common.sh@931 -- # uname 00:30:18.081 08:26:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:18.081 08:26:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66005 00:30:18.081 08:26:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:30:18.081 08:26:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:30:18.081 08:26:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66005' 00:30:18.081 killing 
process with pid 66005 00:30:18.081 08:26:51 -- common/autotest_common.sh@945 -- # kill 66005 00:30:18.081 Received shutdown signal, test time was about 10.000000 seconds 00:30:18.081 00:30:18.081 Latency(us) 00:30:18.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.081 =================================================================================================================== 00:30:18.081 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:18.081 08:26:51 -- common/autotest_common.sh@950 -- # wait 66005 00:30:18.338 08:26:51 -- fips/fips.sh@17 -- # nvmftestfini 00:30:18.338 08:26:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:18.338 08:26:51 -- nvmf/common.sh@116 -- # sync 00:30:18.338 08:26:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:18.338 08:26:51 -- nvmf/common.sh@119 -- # set +e 00:30:18.338 08:26:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:18.338 08:26:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:18.338 rmmod nvme_tcp 00:30:18.338 rmmod nvme_fabrics 00:30:18.338 rmmod nvme_keyring 00:30:18.596 08:26:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:18.596 08:26:51 -- nvmf/common.sh@123 -- # set -e 00:30:18.596 08:26:51 -- nvmf/common.sh@124 -- # return 0 00:30:18.596 08:26:51 -- nvmf/common.sh@477 -- # '[' -n 65967 ']' 00:30:18.596 08:26:51 -- nvmf/common.sh@478 -- # killprocess 65967 00:30:18.596 08:26:51 -- common/autotest_common.sh@926 -- # '[' -z 65967 ']' 00:30:18.596 08:26:51 -- common/autotest_common.sh@930 -- # kill -0 65967 00:30:18.596 08:26:51 -- common/autotest_common.sh@931 -- # uname 00:30:18.596 08:26:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:18.596 08:26:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65967 00:30:18.596 killing process with pid 65967 00:30:18.596 08:26:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:18.596 08:26:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:18.596 08:26:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65967' 00:30:18.596 08:26:51 -- common/autotest_common.sh@945 -- # kill 65967 00:30:18.596 08:26:51 -- common/autotest_common.sh@950 -- # wait 65967 00:30:18.854 08:26:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:18.854 08:26:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:18.854 08:26:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:18.854 08:26:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:18.854 08:26:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:18.854 08:26:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.854 08:26:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:18.854 08:26:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.854 08:26:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:30:18.854 08:26:52 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:30:18.854 ************************************ 00:30:18.854 END TEST nvmf_fips 00:30:18.854 ************************************ 00:30:18.854 00:30:18.854 real 0m14.175s 00:30:18.854 user 0m19.449s 00:30:18.854 sys 0m5.528s 00:30:18.854 08:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.854 08:26:52 -- common/autotest_common.sh@10 -- # set +x 00:30:18.854 08:26:52 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:30:18.854 08:26:52 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:30:18.854 08:26:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:18.854 08:26:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:18.854 08:26:52 -- common/autotest_common.sh@10 -- # set +x 00:30:18.854 ************************************ 00:30:18.854 START TEST nvmf_fuzz 00:30:18.854 ************************************ 00:30:18.854 08:26:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:30:18.854 * Looking for test storage... 00:30:18.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:18.854 08:26:52 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:18.854 08:26:52 -- nvmf/common.sh@7 -- # uname -s 00:30:18.854 08:26:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.854 08:26:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.854 08:26:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.854 08:26:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.854 08:26:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.854 08:26:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.854 08:26:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.854 08:26:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.854 08:26:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.854 08:26:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.854 08:26:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:30:18.854 08:26:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:30:18.854 08:26:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.854 08:26:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.854 08:26:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:18.854 08:26:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:18.854 08:26:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.854 08:26:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.854 08:26:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.854 08:26:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.854 08:26:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.855 
08:26:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.855 08:26:52 -- paths/export.sh@5 -- # export PATH 00:30:18.855 08:26:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.855 08:26:52 -- nvmf/common.sh@46 -- # : 0 00:30:18.855 08:26:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:18.855 08:26:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:18.855 08:26:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:18.855 08:26:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.855 08:26:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.855 08:26:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:18.855 08:26:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:18.855 08:26:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:19.112 08:26:52 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:30:19.112 08:26:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:19.112 08:26:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.112 08:26:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:19.112 08:26:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:19.112 08:26:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:19.112 08:26:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.112 08:26:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:19.112 08:26:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.112 08:26:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:30:19.112 08:26:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:30:19.112 08:26:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:30:19.112 08:26:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:30:19.112 08:26:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:30:19.112 08:26:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:30:19.112 08:26:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:19.112 08:26:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:19.112 08:26:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:19.112 08:26:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:30:19.112 08:26:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:19.112 08:26:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:19.112 08:26:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:19.113 08:26:52 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:19.113 08:26:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:19.113 08:26:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:19.113 08:26:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:19.113 08:26:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:19.113 08:26:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:30:19.113 08:26:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:30:19.113 Cannot find device "nvmf_tgt_br" 00:30:19.113 08:26:52 -- nvmf/common.sh@154 -- # true 00:30:19.113 08:26:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:30:19.113 Cannot find device "nvmf_tgt_br2" 00:30:19.113 08:26:52 -- nvmf/common.sh@155 -- # true 00:30:19.113 08:26:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:30:19.113 08:26:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:30:19.113 Cannot find device "nvmf_tgt_br" 00:30:19.113 08:26:52 -- nvmf/common.sh@157 -- # true 00:30:19.113 08:26:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:30:19.113 Cannot find device "nvmf_tgt_br2" 00:30:19.113 08:26:52 -- nvmf/common.sh@158 -- # true 00:30:19.113 08:26:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:30:19.113 08:26:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:30:19.113 08:26:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:19.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:19.113 08:26:52 -- nvmf/common.sh@161 -- # true 00:30:19.113 08:26:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:19.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:19.113 08:26:52 -- nvmf/common.sh@162 -- # true 00:30:19.113 08:26:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:30:19.113 08:26:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:19.113 08:26:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:19.113 08:26:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:19.113 08:26:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:19.113 08:26:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:19.113 08:26:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:19.113 08:26:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:19.113 08:26:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:19.113 08:26:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:30:19.113 08:26:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:30:19.113 08:26:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:30:19.113 08:26:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:30:19.113 08:26:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:19.113 08:26:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:19.113 08:26:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:19.371 08:26:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:30:19.371 08:26:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:30:19.371 08:26:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:30:19.371 08:26:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:19.371 08:26:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:19.371 08:26:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:19.371 08:26:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:19.371 08:26:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:30:19.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:19.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:30:19.371 00:30:19.371 --- 10.0.0.2 ping statistics --- 00:30:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.371 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:30:19.371 08:26:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:30:19.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:19.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:30:19.371 00:30:19.371 --- 10.0.0.3 ping statistics --- 00:30:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.371 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:30:19.371 08:26:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:19.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:19.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:30:19.371 00:30:19.371 --- 10.0.0.1 ping statistics --- 00:30:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.371 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:30:19.371 08:26:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:19.371 08:26:52 -- nvmf/common.sh@421 -- # return 0 00:30:19.371 08:26:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:19.371 08:26:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:19.371 08:26:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:19.371 08:26:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:19.371 08:26:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:19.371 08:26:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:19.371 08:26:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:19.371 08:26:52 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:19.371 08:26:52 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=66332 00:30:19.371 08:26:52 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:30:19.371 08:26:52 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 66332 00:30:19.371 08:26:52 -- common/autotest_common.sh@819 -- # '[' -z 66332 ']' 00:30:19.371 08:26:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.371 08:26:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:19.371 08:26:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
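The fuzz target follows the same start-up handshake as the TLS and FIPS targets: nvmf_tgt is launched inside the namespace, waitforlisten blocks until the RPC socket answers, and only then do the rpc_cmd calls that follow build up the subsystem (TCP transport, a 64 MiB Malloc0 bdev, cnode1 with that namespace and a 10.0.0.2:4420 listener). Roughly, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the liveness probe (the real waitforlisten helper is more involved):

    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # wait for the target's RPC server before configuring it
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001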
00:30:19.371 08:26:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:19.371 08:26:52 -- common/autotest_common.sh@10 -- # set +x 00:30:20.306 08:26:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:20.306 08:26:53 -- common/autotest_common.sh@852 -- # return 0 00:30:20.306 08:26:53 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:20.306 08:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.306 08:26:53 -- common/autotest_common.sh@10 -- # set +x 00:30:20.306 08:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.306 08:26:53 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:30:20.306 08:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.306 08:26:53 -- common/autotest_common.sh@10 -- # set +x 00:30:20.306 Malloc0 00:30:20.306 08:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.306 08:26:53 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:20.306 08:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.306 08:26:53 -- common/autotest_common.sh@10 -- # set +x 00:30:20.306 08:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.306 08:26:53 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:20.306 08:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.306 08:26:53 -- common/autotest_common.sh@10 -- # set +x 00:30:20.306 08:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.306 08:26:53 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.306 08:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.306 08:26:53 -- common/autotest_common.sh@10 -- # set +x 00:30:20.306 08:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.306 08:26:53 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:30:20.306 08:26:53 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:30:20.873 Shutting down the fuzz application 00:30:20.873 08:26:54 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:30:21.439 Shutting down the fuzz application 00:30:21.439 08:26:54 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:21.439 08:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.439 08:26:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.439 08:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.439 08:26:54 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:21.439 08:26:54 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:30:21.439 08:26:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:21.439 08:26:54 -- nvmf/common.sh@116 -- # sync 00:30:21.439 08:26:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:21.439 08:26:54 -- nvmf/common.sh@119 -- # set +e 00:30:21.439 08:26:54 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:30:21.439 08:26:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:21.439 rmmod nvme_tcp 00:30:21.439 rmmod nvme_fabrics 00:30:21.439 rmmod nvme_keyring 00:30:21.439 08:26:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:21.439 08:26:54 -- nvmf/common.sh@123 -- # set -e 00:30:21.439 08:26:54 -- nvmf/common.sh@124 -- # return 0 00:30:21.439 08:26:54 -- nvmf/common.sh@477 -- # '[' -n 66332 ']' 00:30:21.439 08:26:54 -- nvmf/common.sh@478 -- # killprocess 66332 00:30:21.439 08:26:54 -- common/autotest_common.sh@926 -- # '[' -z 66332 ']' 00:30:21.439 08:26:54 -- common/autotest_common.sh@930 -- # kill -0 66332 00:30:21.439 08:26:54 -- common/autotest_common.sh@931 -- # uname 00:30:21.439 08:26:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:21.439 08:26:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66332 00:30:21.439 08:26:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:21.439 08:26:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:21.439 killing process with pid 66332 00:30:21.439 08:26:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66332' 00:30:21.439 08:26:54 -- common/autotest_common.sh@945 -- # kill 66332 00:30:21.439 08:26:54 -- common/autotest_common.sh@950 -- # wait 66332 00:30:21.696 08:26:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:21.696 08:26:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:21.696 08:26:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:21.696 08:26:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:21.696 08:26:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:21.696 08:26:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.696 08:26:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:21.696 08:26:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.696 08:26:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:30:21.696 08:26:54 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:30:21.696 00:30:21.696 real 0m2.913s 00:30:21.696 user 0m3.223s 00:30:21.696 sys 0m0.671s 00:30:21.696 08:26:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.696 08:26:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.696 ************************************ 00:30:21.696 END TEST nvmf_fuzz 00:30:21.696 ************************************ 00:30:21.984 08:26:55 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:30:21.984 08:26:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:21.984 08:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:21.984 08:26:55 -- common/autotest_common.sh@10 -- # set +x 00:30:21.984 ************************************ 00:30:21.984 START TEST nvmf_multiconnection 00:30:21.984 ************************************ 00:30:21.984 08:26:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:30:21.984 * Looking for test storage... 
00:30:21.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:21.984 08:26:55 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:21.984 08:26:55 -- nvmf/common.sh@7 -- # uname -s 00:30:21.984 08:26:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.984 08:26:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.984 08:26:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.984 08:26:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.984 08:26:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.984 08:26:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.984 08:26:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.984 08:26:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.984 08:26:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.984 08:26:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.984 08:26:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:30:21.984 08:26:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:30:21.984 08:26:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.984 08:26:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.984 08:26:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:21.984 08:26:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:21.984 08:26:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.984 08:26:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.984 08:26:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.984 08:26:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.984 08:26:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.984 08:26:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.984 08:26:55 -- 
paths/export.sh@5 -- # export PATH 00:30:21.984 08:26:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.984 08:26:55 -- nvmf/common.sh@46 -- # : 0 00:30:21.984 08:26:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:21.984 08:26:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:21.984 08:26:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:21.984 08:26:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.984 08:26:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.984 08:26:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:21.984 08:26:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:21.984 08:26:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:21.984 08:26:55 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:21.984 08:26:55 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:21.984 08:26:55 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:30:21.984 08:26:55 -- target/multiconnection.sh@16 -- # nvmftestinit 00:30:21.984 08:26:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:21.984 08:26:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.985 08:26:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:21.985 08:26:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:21.985 08:26:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:21.985 08:26:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.985 08:26:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:21.985 08:26:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.985 08:26:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:30:21.985 08:26:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:30:21.985 08:26:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:30:21.985 08:26:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:30:21.985 08:26:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:30:21.985 08:26:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:30:21.985 08:26:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.985 08:26:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.985 08:26:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:21.985 08:26:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:30:21.985 08:26:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:21.985 08:26:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:21.985 08:26:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:21.985 08:26:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.985 08:26:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:21.985 08:26:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:21.985 08:26:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:21.985 08:26:55 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:21.985 08:26:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:30:21.985 08:26:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:30:21.985 Cannot find device "nvmf_tgt_br" 00:30:21.985 08:26:55 -- nvmf/common.sh@154 -- # true 00:30:21.985 08:26:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:30:21.985 Cannot find device "nvmf_tgt_br2" 00:30:21.985 08:26:55 -- nvmf/common.sh@155 -- # true 00:30:21.985 08:26:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:30:21.985 08:26:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:30:21.985 Cannot find device "nvmf_tgt_br" 00:30:21.985 08:26:55 -- nvmf/common.sh@157 -- # true 00:30:21.985 08:26:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:30:21.985 Cannot find device "nvmf_tgt_br2" 00:30:21.985 08:26:55 -- nvmf/common.sh@158 -- # true 00:30:21.985 08:26:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:30:22.246 08:26:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:30:22.246 08:26:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:22.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:22.246 08:26:55 -- nvmf/common.sh@161 -- # true 00:30:22.246 08:26:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:22.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:22.246 08:26:55 -- nvmf/common.sh@162 -- # true 00:30:22.246 08:26:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:30:22.246 08:26:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:22.246 08:26:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:22.246 08:26:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:22.246 08:26:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:22.246 08:26:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:22.246 08:26:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:22.246 08:26:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:22.246 08:26:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:22.246 08:26:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:30:22.246 08:26:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:30:22.246 08:26:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:30:22.246 08:26:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:30:22.246 08:26:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:22.246 08:26:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:22.246 08:26:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:22.246 08:26:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:30:22.246 08:26:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:30:22.246 08:26:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:30:22.246 08:26:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:22.246 08:26:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:22.246 
08:26:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:22.246 08:26:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:22.246 08:26:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:30:22.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:30:22.246 00:30:22.246 --- 10.0.0.2 ping statistics --- 00:30:22.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.246 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:30:22.246 08:26:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:30:22.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:22.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:30:22.246 00:30:22.246 --- 10.0.0.3 ping statistics --- 00:30:22.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.246 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:30:22.246 08:26:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:22.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:30:22.246 00:30:22.246 --- 10.0.0.1 ping statistics --- 00:30:22.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.246 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:30:22.246 08:26:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.246 08:26:55 -- nvmf/common.sh@421 -- # return 0 00:30:22.246 08:26:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:22.246 08:26:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.246 08:26:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:22.503 08:26:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:22.503 08:26:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.503 08:26:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:22.503 08:26:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:22.503 08:26:55 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:30:22.503 08:26:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:22.503 08:26:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:22.503 08:26:55 -- common/autotest_common.sh@10 -- # set +x 00:30:22.503 08:26:55 -- nvmf/common.sh@469 -- # nvmfpid=66525 00:30:22.503 08:26:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:22.503 08:26:55 -- nvmf/common.sh@470 -- # waitforlisten 66525 00:30:22.503 08:26:55 -- common/autotest_common.sh@819 -- # '[' -z 66525 ']' 00:30:22.503 08:26:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.503 08:26:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:22.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.503 08:26:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.503 08:26:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:22.503 08:26:55 -- common/autotest_common.sh@10 -- # set +x 00:30:22.503 [2024-04-17 08:26:55.666728] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
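nvmfappstart in this trace amounts to launching nvmf_tgt inside the namespace and waiting for its RPC socket before any rpc_cmd calls are issued. A rough standalone equivalent, with paths taken from the trace (the real waitforlisten helper does more careful retry and error handling than this sketch):

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the target is up and answering on its default RPC socket
    until [ -S /var/tmp/spdk.sock ] && "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt is up with pid $nvmfpid"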
00:30:22.503 [2024-04-17 08:26:55.666801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.503 [2024-04-17 08:26:55.810370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:22.763 [2024-04-17 08:26:55.918581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:22.763 [2024-04-17 08:26:55.918758] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.763 [2024-04-17 08:26:55.918770] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.763 [2024-04-17 08:26:55.918777] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.763 [2024-04-17 08:26:55.918904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.763 [2024-04-17 08:26:55.919089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.763 [2024-04-17 08:26:55.919216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.763 [2024-04-17 08:26:55.919205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.330 08:26:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:23.330 08:26:56 -- common/autotest_common.sh@852 -- # return 0 00:30:23.330 08:26:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:23.330 08:26:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:23.330 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.330 08:26:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.330 08:26:56 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:23.330 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.330 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.330 [2024-04-17 08:26:56.592816] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.330 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.330 08:26:56 -- target/multiconnection.sh@21 -- # seq 1 11 00:30:23.330 08:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.330 08:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:23.330 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.330 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.330 Malloc1 00:30:23.331 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.331 08:26:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:30:23.331 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.331 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 [2024-04-17 08:26:56.684324] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.589 08:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 Malloc2 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.589 08:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 Malloc3 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.589 08:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:30:23.589 
08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 Malloc4 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:30:23.589 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.589 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.589 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.589 08:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.589 08:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:30:23.590 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.590 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.590 Malloc5 00:30:23.590 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.590 08:26:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:30:23.590 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.590 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.590 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.590 08:26:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:30:23.590 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.590 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.590 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.590 08:26:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:30:23.590 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.590 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.590 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.590 08:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.590 08:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:30:23.590 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.590 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.590 Malloc6 00:30:23.590 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.590 08:26:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:30:23.590 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.590 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.590 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.590 08:26:56 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:30:23.590 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.590 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:30:23.849 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.849 08:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:30:23.849 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 Malloc7 00:30:23.849 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:30:23.849 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:30:23.849 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:30:23.849 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.849 08:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:30:23.849 08:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 Malloc8 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 
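Each iteration of the loop traced in this stretch performs the same target-side RPCs per subsystem; rpc_cmd in the test harness is a thin wrapper around scripts/rpc.py. After the one-time transport setup (nvmf_create_transport -t tcp -o -u 8192), a single iteration issued directly would look roughly like this, shown for subsystem 1 with the names and addresses used in the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"
    "$RPC" bdev_malloc_create 64 512 -b Malloc1                              # 64 MiB malloc bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1      # allow any host, serial SPDK1
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1          # expose the bdev as a namespace
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # NVMe/TCP listener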
00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.849 08:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 Malloc9 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.849 08:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 Malloc10 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:23.849 08:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 Malloc11 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.849 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:23.849 08:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:30:23.849 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:23.849 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:24.109 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.109 08:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:30:24.109 08:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.109 08:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:24.109 08:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.109 08:26:57 -- target/multiconnection.sh@28 -- # seq 1 11 00:30:24.109 08:26:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:24.109 08:26:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:24.109 08:26:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:30:24.109 08:26:57 -- common/autotest_common.sh@1177 -- # local i=0 00:30:24.109 08:26:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:24.109 08:26:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:24.109 08:26:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:26.011 08:26:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:26.011 08:26:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:30:26.011 08:26:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:26.268 08:26:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:26.268 08:26:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:26.268 08:26:59 -- common/autotest_common.sh@1187 -- # return 0 00:30:26.268 08:26:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:26.268 08:26:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:30:26.268 08:26:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:30:26.268 08:26:59 -- common/autotest_common.sh@1177 -- # local i=0 00:30:26.269 08:26:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:26.269 08:26:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:26.269 08:26:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:28.171 08:27:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:28.171 08:27:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:28.171 08:27:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:30:28.171 08:27:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:28.171 08:27:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:28.171 08:27:01 -- common/autotest_common.sh@1187 -- # return 0 00:30:28.171 08:27:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:30:28.172 08:27:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:30:28.430 08:27:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:30:28.430 08:27:01 -- common/autotest_common.sh@1177 -- # local i=0 00:30:28.430 08:27:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:28.430 08:27:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:28.430 08:27:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:30.332 08:27:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:30.332 08:27:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:30.332 08:27:03 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:30:30.333 08:27:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:30.333 08:27:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:30.333 08:27:03 -- common/autotest_common.sh@1187 -- # return 0 00:30:30.333 08:27:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:30.333 08:27:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:30:30.592 08:27:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:30:30.592 08:27:03 -- common/autotest_common.sh@1177 -- # local i=0 00:30:30.592 08:27:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:30.592 08:27:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:30.592 08:27:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:32.507 08:27:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:32.507 08:27:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:32.507 08:27:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:30:32.507 08:27:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:32.507 08:27:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:32.507 08:27:05 -- common/autotest_common.sh@1187 -- # return 0 00:30:32.507 08:27:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:32.507 08:27:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:30:32.796 08:27:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:30:32.796 08:27:05 -- common/autotest_common.sh@1177 -- # local i=0 00:30:32.796 08:27:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:32.796 08:27:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:32.796 08:27:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:34.703 08:27:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:34.703 08:27:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:34.703 08:27:07 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:30:34.703 08:27:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:34.703 08:27:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:34.703 08:27:07 
-- common/autotest_common.sh@1187 -- # return 0 00:30:34.703 08:27:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:34.703 08:27:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:30:34.963 08:27:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:30:34.963 08:27:08 -- common/autotest_common.sh@1177 -- # local i=0 00:30:34.963 08:27:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:34.963 08:27:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:34.963 08:27:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:36.866 08:27:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:36.866 08:27:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:30:36.866 08:27:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:36.866 08:27:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:36.866 08:27:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:36.866 08:27:10 -- common/autotest_common.sh@1187 -- # return 0 00:30:36.866 08:27:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:36.866 08:27:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:30:37.124 08:27:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:30:37.124 08:27:10 -- common/autotest_common.sh@1177 -- # local i=0 00:30:37.124 08:27:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:37.124 08:27:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:37.124 08:27:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:39.028 08:27:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:39.028 08:27:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:30:39.028 08:27:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:39.028 08:27:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:39.028 08:27:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:39.028 08:27:12 -- common/autotest_common.sh@1187 -- # return 0 00:30:39.028 08:27:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:39.028 08:27:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:30:39.288 08:27:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:30:39.288 08:27:12 -- common/autotest_common.sh@1177 -- # local i=0 00:30:39.288 08:27:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:39.288 08:27:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:39.288 08:27:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:41.193 08:27:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:41.193 08:27:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:41.193 08:27:14 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:30:41.193 08:27:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
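On the host side, each step of this connect loop is a plain nvme-cli connect followed by polling lsblk until a block device with the expected serial appears (the waitforserial helper traced above). A minimal standalone version for the first subsystem, assuming nvme-cli is installed and the nvme-tcp kernel module is loaded; the host NQN and host ID are the values generated earlier in the trace:

    modprobe nvme-tcp
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d \
        --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d
    # give the controller up to ~30 s to surface a namespace with serial SPDK1
    for i in $(seq 1 15); do
        lsblk -l -o NAME,SERIAL | grep -q SPDK1 && break
        sleep 2
    done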
00:30:41.193 08:27:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:41.193 08:27:14 -- common/autotest_common.sh@1187 -- # return 0 00:30:41.193 08:27:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:41.193 08:27:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:30:41.452 08:27:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:30:41.452 08:27:14 -- common/autotest_common.sh@1177 -- # local i=0 00:30:41.452 08:27:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:41.452 08:27:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:41.452 08:27:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:43.357 08:27:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:43.357 08:27:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:43.357 08:27:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:30:43.357 08:27:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:43.357 08:27:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:43.357 08:27:16 -- common/autotest_common.sh@1187 -- # return 0 00:30:43.357 08:27:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:43.357 08:27:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:30:43.615 08:27:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:30:43.615 08:27:16 -- common/autotest_common.sh@1177 -- # local i=0 00:30:43.615 08:27:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:43.615 08:27:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:43.615 08:27:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:45.522 08:27:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:45.522 08:27:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:45.522 08:27:18 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:30:45.522 08:27:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:45.522 08:27:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:45.522 08:27:18 -- common/autotest_common.sh@1187 -- # return 0 00:30:45.522 08:27:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:30:45.522 08:27:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:30:45.781 08:27:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:30:45.781 08:27:18 -- common/autotest_common.sh@1177 -- # local i=0 00:30:45.781 08:27:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:30:45.781 08:27:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:30:45.781 08:27:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:30:47.686 08:27:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:30:47.686 08:27:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:30:47.686 08:27:20 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:30:47.686 08:27:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:30:47.686 08:27:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:30:47.686 08:27:20 -- common/autotest_common.sh@1187 -- # return 0 00:30:47.686 08:27:20 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:30:47.686 [global] 00:30:47.686 thread=1 00:30:47.686 invalidate=1 00:30:47.686 rw=read 00:30:47.686 time_based=1 00:30:47.686 runtime=10 00:30:47.686 ioengine=libaio 00:30:47.686 direct=1 00:30:47.686 bs=262144 00:30:47.686 iodepth=64 00:30:47.686 norandommap=1 00:30:47.686 numjobs=1 00:30:47.686 00:30:47.686 [job0] 00:30:47.686 filename=/dev/nvme0n1 00:30:47.686 [job1] 00:30:47.686 filename=/dev/nvme10n1 00:30:47.686 [job2] 00:30:47.686 filename=/dev/nvme1n1 00:30:47.686 [job3] 00:30:47.686 filename=/dev/nvme2n1 00:30:47.686 [job4] 00:30:47.686 filename=/dev/nvme3n1 00:30:47.686 [job5] 00:30:47.686 filename=/dev/nvme4n1 00:30:47.686 [job6] 00:30:47.686 filename=/dev/nvme5n1 00:30:47.945 [job7] 00:30:47.945 filename=/dev/nvme6n1 00:30:47.945 [job8] 00:30:47.945 filename=/dev/nvme7n1 00:30:47.945 [job9] 00:30:47.945 filename=/dev/nvme8n1 00:30:47.946 [job10] 00:30:47.946 filename=/dev/nvme9n1 00:30:47.946 Could not set queue depth (nvme0n1) 00:30:47.946 Could not set queue depth (nvme10n1) 00:30:47.946 Could not set queue depth (nvme1n1) 00:30:47.946 Could not set queue depth (nvme2n1) 00:30:47.946 Could not set queue depth (nvme3n1) 00:30:47.946 Could not set queue depth (nvme4n1) 00:30:47.946 Could not set queue depth (nvme5n1) 00:30:47.946 Could not set queue depth (nvme6n1) 00:30:47.946 Could not set queue depth (nvme7n1) 00:30:47.946 Could not set queue depth (nvme8n1) 00:30:47.946 Could not set queue depth (nvme9n1) 00:30:48.205 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:30:48.205 fio-3.35 00:30:48.205 Starting 11 threads 00:31:00.412 00:31:00.412 job0: (groupid=0, jobs=1): err= 0: pid=66985: Wed Apr 17 08:27:31 2024 00:31:00.412 read: IOPS=732, BW=183MiB/s (192MB/s)(1839MiB/10037msec) 00:31:00.412 slat (usec): min=19, max=55864, avg=1355.20, stdev=3197.83 
00:31:00.412 clat (msec): min=21, max=171, avg=85.86, stdev=18.38 00:31:00.412 lat (msec): min=21, max=183, avg=87.22, stdev=18.63 00:31:00.412 clat percentiles (msec): 00:31:00.412 | 1.00th=[ 52], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 72], 00:31:00.412 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 90], 00:31:00.412 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 122], 00:31:00.412 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 165], 99.95th=[ 165], 00:31:00.412 | 99.99th=[ 171] 00:31:00.412 bw ( KiB/s): min=114176, max=263680, per=9.67%, avg=186603.35, stdev=34736.80, samples=20 00:31:00.412 iops : min= 446, max= 1030, avg=728.85, stdev=135.57, samples=20 00:31:00.412 lat (msec) : 50=0.44%, 100=88.22%, 250=11.34% 00:31:00.412 cpu : usr=0.42%, sys=4.14%, ctx=1689, majf=0, minf=4097 00:31:00.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:31:00.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.412 issued rwts: total=7354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.412 job1: (groupid=0, jobs=1): err= 0: pid=66986: Wed Apr 17 08:27:31 2024 00:31:00.412 read: IOPS=947, BW=237MiB/s (248MB/s)(2396MiB/10119msec) 00:31:00.412 slat (usec): min=16, max=183038, avg=1039.30, stdev=4576.58 00:31:00.412 clat (msec): min=7, max=275, avg=66.37, stdev=64.04 00:31:00.412 lat (msec): min=9, max=410, avg=67.41, stdev=65.12 00:31:00.412 clat percentiles (msec): 00:31:00.412 | 1.00th=[ 23], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 31], 00:31:00.412 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:31:00.412 | 70.00th=[ 35], 80.00th=[ 161], 90.00th=[ 182], 95.00th=[ 199], 00:31:00.412 | 99.00th=[ 232], 99.50th=[ 257], 99.90th=[ 268], 99.95th=[ 268], 00:31:00.412 | 99.99th=[ 275] 00:31:00.412 bw ( KiB/s): min=73580, max=508928, per=12.64%, avg=243719.95, stdev=197883.10, samples=20 00:31:00.412 iops : min= 287, max= 1988, avg=952.00, stdev=773.01, samples=20 00:31:00.412 lat (msec) : 10=0.03%, 20=0.76%, 50=75.87%, 100=0.34%, 250=22.46% 00:31:00.412 lat (msec) : 500=0.53% 00:31:00.412 cpu : usr=0.44%, sys=5.41%, ctx=2381, majf=0, minf=4097 00:31:00.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:00.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.412 issued rwts: total=9585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.412 job2: (groupid=0, jobs=1): err= 0: pid=66987: Wed Apr 17 08:27:31 2024 00:31:00.412 read: IOPS=362, BW=90.5MiB/s (94.9MB/s)(916MiB/10116msec) 00:31:00.412 slat (usec): min=16, max=98492, avg=2698.05, stdev=7245.01 00:31:00.412 clat (msec): min=24, max=269, avg=173.77, stdev=25.52 00:31:00.412 lat (msec): min=25, max=302, avg=176.47, stdev=26.41 00:31:00.412 clat percentiles (msec): 00:31:00.412 | 1.00th=[ 69], 5.00th=[ 140], 10.00th=[ 153], 20.00th=[ 161], 00:31:00.412 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:31:00.412 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 201], 95.00th=[ 213], 00:31:00.412 | 99.00th=[ 236], 99.50th=[ 247], 99.90th=[ 264], 99.95th=[ 271], 00:31:00.412 | 99.99th=[ 271] 00:31:00.412 bw ( KiB/s): min=77824, max=110592, per=4.77%, avg=92079.65, stdev=8782.20, samples=20 
00:31:00.412 iops : min= 304, max= 432, avg=359.65, stdev=34.30, samples=20 00:31:00.412 lat (msec) : 50=0.14%, 100=1.75%, 250=97.76%, 500=0.35% 00:31:00.412 cpu : usr=0.21%, sys=2.04%, ctx=919, majf=0, minf=4097 00:31:00.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:31:00.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.412 issued rwts: total=3662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.412 job3: (groupid=0, jobs=1): err= 0: pid=66988: Wed Apr 17 08:27:31 2024 00:31:00.412 read: IOPS=1055, BW=264MiB/s (277MB/s)(2644MiB/10024msec) 00:31:00.412 slat (usec): min=15, max=77654, avg=938.75, stdev=2643.26 00:31:00.412 clat (msec): min=4, max=208, avg=59.62, stdev=30.48 00:31:00.412 lat (msec): min=4, max=224, avg=60.55, stdev=30.93 00:31:00.412 clat percentiles (msec): 00:31:00.412 | 1.00th=[ 29], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:31:00.412 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 54], 60.00th=[ 63], 00:31:00.412 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 96], 95.00th=[ 107], 00:31:00.412 | 99.00th=[ 163], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 192], 00:31:00.412 | 99.99th=[ 194] 00:31:00.412 bw ( KiB/s): min=82432, max=484352, per=13.94%, avg=268981.85, stdev=130908.29, samples=20 00:31:00.412 iops : min= 322, max= 1892, avg=1050.60, stdev=511.41, samples=20 00:31:00.412 lat (msec) : 10=0.11%, 20=0.31%, 50=47.42%, 100=45.05%, 250=7.10% 00:31:00.412 cpu : usr=0.58%, sys=5.28%, ctx=2333, majf=0, minf=4097 00:31:00.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:00.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.412 issued rwts: total=10577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.412 job4: (groupid=0, jobs=1): err= 0: pid=66989: Wed Apr 17 08:27:31 2024 00:31:00.412 read: IOPS=743, BW=186MiB/s (195MB/s)(1860MiB/10007msec) 00:31:00.412 slat (usec): min=18, max=36158, avg=1334.06, stdev=3134.49 00:31:00.412 clat (msec): min=4, max=172, avg=84.60, stdev=21.13 00:31:00.412 lat (msec): min=4, max=172, avg=85.93, stdev=21.41 00:31:00.412 clat percentiles (msec): 00:31:00.412 | 1.00th=[ 16], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 69], 00:31:00.412 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 89], 00:31:00.412 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 102], 95.00th=[ 124], 00:31:00.412 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 174], 00:31:00.412 | 99.99th=[ 174] 00:31:00.412 bw ( KiB/s): min=107222, max=269824, per=9.61%, avg=185277.58, stdev=36737.46, samples=19 00:31:00.412 iops : min= 418, max= 1054, avg=723.63, stdev=143.62, samples=19 00:31:00.412 lat (msec) : 10=0.46%, 20=1.02%, 50=1.44%, 100=85.75%, 250=11.33% 00:31:00.412 cpu : usr=0.29%, sys=3.45%, ctx=1602, majf=0, minf=4097 00:31:00.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:00.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.412 issued rwts: total=7439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.412 job5: 
(groupid=0, jobs=1): err= 0: pid=66990: Wed Apr 17 08:27:31 2024 00:31:00.412 read: IOPS=1611, BW=403MiB/s (423MB/s)(4073MiB/10107msec) 00:31:00.412 slat (usec): min=16, max=165730, avg=607.82, stdev=2609.56 00:31:00.412 clat (msec): min=8, max=268, avg=39.01, stdev=31.39 00:31:00.412 lat (msec): min=8, max=317, avg=39.62, stdev=31.88 00:31:00.412 clat percentiles (msec): 00:31:00.412 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:31:00.412 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:31:00.412 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 43], 95.00th=[ 144], 00:31:00.412 | 99.00th=[ 178], 99.50th=[ 220], 99.90th=[ 249], 99.95th=[ 259], 00:31:00.412 | 99.99th=[ 268] 00:31:00.412 bw ( KiB/s): min=94208, max=535552, per=21.53%, avg=415372.00, stdev=169533.93, samples=20 00:31:00.412 iops : min= 368, max= 2092, avg=1622.40, stdev=662.18, samples=20 00:31:00.412 lat (msec) : 10=0.03%, 20=0.26%, 50=94.01%, 100=0.61%, 250=4.99% 00:31:00.412 lat (msec) : 500=0.09% 00:31:00.412 cpu : usr=0.87%, sys=8.68%, ctx=3642, majf=0, minf=4097 00:31:00.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:31:00.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.412 issued rwts: total=16291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.413 job6: (groupid=0, jobs=1): err= 0: pid=66991: Wed Apr 17 08:27:31 2024 00:31:00.413 read: IOPS=368, BW=92.2MiB/s (96.7MB/s)(933MiB/10117msec) 00:31:00.413 slat (usec): min=13, max=107184, avg=2700.67, stdev=6981.53 00:31:00.413 clat (msec): min=14, max=273, avg=170.49, stdev=28.60 00:31:00.413 lat (msec): min=31, max=288, avg=173.19, stdev=29.31 00:31:00.413 clat percentiles (msec): 00:31:00.413 | 1.00th=[ 97], 5.00th=[ 111], 10.00th=[ 128], 20.00th=[ 159], 00:31:00.413 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 176], 00:31:00.413 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 201], 95.00th=[ 213], 00:31:00.413 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 259], 99.95th=[ 275], 00:31:00.413 | 99.99th=[ 275] 00:31:00.413 bw ( KiB/s): min=75264, max=126464, per=4.86%, avg=93834.05, stdev=12768.97, samples=20 00:31:00.413 iops : min= 294, max= 494, avg=366.50, stdev=49.87, samples=20 00:31:00.413 lat (msec) : 20=0.03%, 50=0.38%, 100=1.23%, 250=98.12%, 500=0.24% 00:31:00.413 cpu : usr=0.25%, sys=2.27%, ctx=1026, majf=0, minf=4097 00:31:00.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:31:00.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.413 issued rwts: total=3731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.413 job7: (groupid=0, jobs=1): err= 0: pid=66992: Wed Apr 17 08:27:31 2024 00:31:00.413 read: IOPS=519, BW=130MiB/s (136MB/s)(1304MiB/10027msec) 00:31:00.413 slat (usec): min=19, max=59912, avg=1860.18, stdev=5442.47 00:31:00.413 clat (msec): min=3, max=269, avg=120.99, stdev=62.18 00:31:00.413 lat (msec): min=3, max=271, avg=122.85, stdev=63.27 00:31:00.413 clat percentiles (msec): 00:31:00.413 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 58], 00:31:00.413 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 125], 60.00th=[ 167], 00:31:00.413 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 
197], 95.00th=[ 203], 00:31:00.413 | 99.00th=[ 234], 99.50th=[ 239], 99.90th=[ 245], 99.95th=[ 247], 00:31:00.413 | 99.99th=[ 271] 00:31:00.413 bw ( KiB/s): min=75776, max=309141, per=6.83%, avg=131838.85, stdev=76113.66, samples=20 00:31:00.413 iops : min= 296, max= 1207, avg=514.90, stdev=297.26, samples=20 00:31:00.413 lat (msec) : 4=0.08%, 10=0.17%, 20=0.46%, 50=5.93%, 100=39.14% 00:31:00.413 lat (msec) : 250=54.20%, 500=0.02% 00:31:00.413 cpu : usr=0.34%, sys=2.79%, ctx=1327, majf=0, minf=4097 00:31:00.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:00.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.413 issued rwts: total=5214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.413 job8: (groupid=0, jobs=1): err= 0: pid=66993: Wed Apr 17 08:27:31 2024 00:31:00.413 read: IOPS=501, BW=125MiB/s (132MB/s)(1259MiB/10035msec) 00:31:00.413 slat (usec): min=17, max=73734, avg=1940.43, stdev=5267.52 00:31:00.413 clat (msec): min=10, max=281, avg=125.32, stdev=50.81 00:31:00.413 lat (msec): min=10, max=281, avg=127.26, stdev=51.74 00:31:00.413 clat percentiles (msec): 00:31:00.413 | 1.00th=[ 41], 5.00th=[ 69], 10.00th=[ 75], 20.00th=[ 83], 00:31:00.413 | 30.00th=[ 87], 40.00th=[ 92], 50.00th=[ 99], 60.00th=[ 125], 00:31:00.413 | 70.00th=[ 171], 80.00th=[ 182], 90.00th=[ 197], 95.00th=[ 203], 00:31:00.413 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 262], 99.95th=[ 262], 00:31:00.413 | 99.99th=[ 284] 00:31:00.413 bw ( KiB/s): min=74752, max=203369, per=6.60%, avg=127278.80, stdev=49025.74, samples=20 00:31:00.413 iops : min= 292, max= 794, avg=497.15, stdev=191.48, samples=20 00:31:00.413 lat (msec) : 20=0.30%, 50=1.01%, 100=50.79%, 250=47.72%, 500=0.18% 00:31:00.413 cpu : usr=0.32%, sys=2.40%, ctx=1134, majf=0, minf=4097 00:31:00.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:31:00.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.413 issued rwts: total=5036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.413 job9: (groupid=0, jobs=1): err= 0: pid=66994: Wed Apr 17 08:27:31 2024 00:31:00.413 read: IOPS=369, BW=92.4MiB/s (96.9MB/s)(934MiB/10107msec) 00:31:00.413 slat (usec): min=18, max=62220, avg=2672.71, stdev=6432.93 00:31:00.413 clat (msec): min=38, max=295, avg=170.12, stdev=29.31 00:31:00.413 lat (msec): min=38, max=295, avg=172.80, stdev=30.07 00:31:00.413 clat percentiles (msec): 00:31:00.413 | 1.00th=[ 65], 5.00th=[ 114], 10.00th=[ 132], 20.00th=[ 159], 00:31:00.413 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 176], 00:31:00.413 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 199], 95.00th=[ 211], 00:31:00.413 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 262], 99.95th=[ 264], 00:31:00.413 | 99.99th=[ 296] 00:31:00.413 bw ( KiB/s): min=75776, max=135168, per=4.87%, avg=94017.85, stdev=13410.71, samples=20 00:31:00.413 iops : min= 296, max= 528, avg=367.20, stdev=52.40, samples=20 00:31:00.413 lat (msec) : 50=0.64%, 100=1.79%, 250=97.08%, 500=0.48% 00:31:00.413 cpu : usr=0.12%, sys=1.89%, ctx=903, majf=0, minf=4097 00:31:00.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:31:00.413 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.413 issued rwts: total=3737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.413 job10: (groupid=0, jobs=1): err= 0: pid=66995: Wed Apr 17 08:27:31 2024 00:31:00.413 read: IOPS=358, BW=89.6MiB/s (93.9MB/s)(905MiB/10102msec) 00:31:00.413 slat (usec): min=18, max=112080, avg=2699.43, stdev=6636.05 00:31:00.413 clat (msec): min=102, max=258, avg=175.55, stdev=20.45 00:31:00.413 lat (msec): min=102, max=271, avg=178.25, stdev=21.25 00:31:00.413 clat percentiles (msec): 00:31:00.413 | 1.00th=[ 129], 5.00th=[ 146], 10.00th=[ 155], 20.00th=[ 163], 00:31:00.413 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:31:00.413 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 201], 95.00th=[ 211], 00:31:00.413 | 99.00th=[ 236], 99.50th=[ 245], 99.90th=[ 255], 99.95th=[ 259], 00:31:00.413 | 99.99th=[ 259] 00:31:00.413 bw ( KiB/s): min=78336, max=112352, per=4.72%, avg=91024.65, stdev=8871.69, samples=20 00:31:00.413 iops : min= 306, max= 438, avg=355.45, stdev=34.56, samples=20 00:31:00.413 lat (msec) : 250=99.75%, 500=0.25% 00:31:00.413 cpu : usr=0.09%, sys=2.13%, ctx=1009, majf=0, minf=4097 00:31:00.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:31:00.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:00.413 issued rwts: total=3619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:00.413 00:31:00.413 Run status group 0 (all jobs): 00:31:00.413 READ: bw=1884MiB/s (1975MB/s), 89.6MiB/s-403MiB/s (93.9MB/s-423MB/s), io=18.6GiB (20.0GB), run=10007-10119msec 00:31:00.413 00:31:00.413 Disk stats (read/write): 00:31:00.413 nvme0n1: ios=14285/0, merge=0/0, ticks=1210180/0, in_queue=1210180, util=98.41% 00:31:00.413 nvme10n1: ios=19054/0, merge=0/0, ticks=1227264/0, in_queue=1227264, util=98.43% 00:31:00.413 nvme1n1: ios=7224/0, merge=0/0, ticks=1228043/0, in_queue=1228043, util=98.55% 00:31:00.413 nvme2n1: ios=20574/0, merge=0/0, ticks=1211292/0, in_queue=1211292, util=98.41% 00:31:00.413 nvme3n1: ios=14217/0, merge=0/0, ticks=1210132/0, in_queue=1210132, util=98.52% 00:31:00.413 nvme4n1: ios=32495/0, merge=0/0, ticks=1227817/0, in_queue=1227817, util=98.68% 00:31:00.413 nvme5n1: ios=7341/0, merge=0/0, ticks=1224004/0, in_queue=1224004, util=98.73% 00:31:00.413 nvme6n1: ios=9852/0, merge=0/0, ticks=1207899/0, in_queue=1207899, util=98.70% 00:31:00.413 nvme7n1: ios=9636/0, merge=0/0, ticks=1208281/0, in_queue=1208281, util=98.89% 00:31:00.413 nvme8n1: ios=7374/0, merge=0/0, ticks=1226901/0, in_queue=1226901, util=98.94% 00:31:00.413 nvme9n1: ios=7141/0, merge=0/0, ticks=1227036/0, in_queue=1227036, util=98.99% 00:31:00.413 08:27:31 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:31:00.413 [global] 00:31:00.413 thread=1 00:31:00.413 invalidate=1 00:31:00.413 rw=randwrite 00:31:00.413 time_based=1 00:31:00.413 runtime=10 00:31:00.413 ioengine=libaio 00:31:00.413 direct=1 00:31:00.413 bs=262144 00:31:00.413 iodepth=64 00:31:00.413 norandommap=1 00:31:00.413 numjobs=1 00:31:00.413 00:31:00.413 [job0] 00:31:00.413 filename=/dev/nvme0n1 00:31:00.413 [job1] 00:31:00.413 filename=/dev/nvme10n1 00:31:00.413 
[job2] 00:31:00.413 filename=/dev/nvme1n1 00:31:00.413 [job3] 00:31:00.413 filename=/dev/nvme2n1 00:31:00.413 [job4] 00:31:00.413 filename=/dev/nvme3n1 00:31:00.413 [job5] 00:31:00.413 filename=/dev/nvme4n1 00:31:00.413 [job6] 00:31:00.413 filename=/dev/nvme5n1 00:31:00.413 [job7] 00:31:00.413 filename=/dev/nvme6n1 00:31:00.413 [job8] 00:31:00.413 filename=/dev/nvme7n1 00:31:00.413 [job9] 00:31:00.413 filename=/dev/nvme8n1 00:31:00.413 [job10] 00:31:00.414 filename=/dev/nvme9n1 00:31:00.414 Could not set queue depth (nvme0n1) 00:31:00.414 Could not set queue depth (nvme10n1) 00:31:00.414 Could not set queue depth (nvme1n1) 00:31:00.414 Could not set queue depth (nvme2n1) 00:31:00.414 Could not set queue depth (nvme3n1) 00:31:00.414 Could not set queue depth (nvme4n1) 00:31:00.414 Could not set queue depth (nvme5n1) 00:31:00.414 Could not set queue depth (nvme6n1) 00:31:00.414 Could not set queue depth (nvme7n1) 00:31:00.414 Could not set queue depth (nvme8n1) 00:31:00.414 Could not set queue depth (nvme9n1) 00:31:00.414 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:00.414 fio-3.35 00:31:00.414 Starting 11 threads 00:31:10.393 00:31:10.393 job0: (groupid=0, jobs=1): err= 0: pid=67190: Wed Apr 17 08:27:42 2024 00:31:10.393 write: IOPS=245, BW=61.4MiB/s (64.4MB/s)(627MiB/10207msec); 0 zone resets 00:31:10.393 slat (usec): min=16, max=90412, avg=3989.84, stdev=7684.18 00:31:10.393 clat (msec): min=24, max=439, avg=256.46, stdev=38.38 00:31:10.393 lat (msec): min=24, max=440, avg=260.45, stdev=38.19 00:31:10.393 clat percentiles (msec): 00:31:10.393 | 1.00th=[ 77], 5.00th=[ 224], 10.00th=[ 230], 20.00th=[ 239], 00:31:10.393 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 259], 00:31:10.393 | 70.00th=[ 271], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 305], 00:31:10.393 | 99.00th=[ 347], 99.50th=[ 393], 99.90th=[ 426], 99.95th=[ 439], 00:31:10.393 | 99.99th=[ 439] 00:31:10.393 bw ( KiB/s): min=51200, max=69493, per=4.91%, avg=62527.50, stdev=4869.12, samples=20 00:31:10.393 iops : min= 200, max= 271, avg=244.15, stdev=18.95, samples=20 00:31:10.393 lat (msec) : 50=0.64%, 100=0.80%, 250=41.68%, 500=56.88% 00:31:10.393 cpu : usr=0.57%, sys=1.04%, ctx=2825, 
majf=0, minf=1 00:31:10.393 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:31:10.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.393 issued rwts: total=0,2507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.393 job1: (groupid=0, jobs=1): err= 0: pid=67201: Wed Apr 17 08:27:42 2024 00:31:10.393 write: IOPS=892, BW=223MiB/s (234MB/s)(2248MiB/10073msec); 0 zone resets 00:31:10.393 slat (usec): min=19, max=7675, avg=1107.99, stdev=1902.30 00:31:10.393 clat (msec): min=9, max=158, avg=70.58, stdev=16.90 00:31:10.393 lat (msec): min=9, max=158, avg=71.69, stdev=17.09 00:31:10.393 clat percentiles (msec): 00:31:10.393 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:31:10.393 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 81], 60.00th=[ 83], 00:31:10.393 | 70.00th=[ 85], 80.00th=[ 87], 90.00th=[ 88], 95.00th=[ 89], 00:31:10.393 | 99.00th=[ 90], 99.50th=[ 101], 99.90th=[ 148], 99.95th=[ 155], 00:31:10.393 | 99.99th=[ 159] 00:31:10.393 bw ( KiB/s): min=185715, max=309248, per=17.96%, avg=228529.35, stdev=54628.54, samples=20 00:31:10.393 iops : min= 725, max= 1208, avg=892.60, stdev=213.46, samples=20 00:31:10.393 lat (msec) : 10=0.04%, 20=0.13%, 50=2.45%, 100=96.87%, 250=0.50% 00:31:10.393 cpu : usr=2.05%, sys=3.37%, ctx=11655, majf=0, minf=1 00:31:10.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:10.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.393 issued rwts: total=0,8990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.393 job2: (groupid=0, jobs=1): err= 0: pid=67202: Wed Apr 17 08:27:42 2024 00:31:10.393 write: IOPS=240, BW=60.1MiB/s (63.0MB/s)(613MiB/10199msec); 0 zone resets 00:31:10.393 slat (usec): min=19, max=110716, avg=4073.30, stdev=8170.62 00:31:10.393 clat (msec): min=112, max=446, avg=262.02, stdev=31.23 00:31:10.393 lat (msec): min=112, max=446, avg=266.10, stdev=30.60 00:31:10.393 clat percentiles (msec): 00:31:10.393 | 1.00th=[ 184], 5.00th=[ 224], 10.00th=[ 232], 20.00th=[ 241], 00:31:10.393 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 264], 00:31:10.393 | 70.00th=[ 268], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 309], 00:31:10.393 | 99.00th=[ 351], 99.50th=[ 401], 99.90th=[ 430], 99.95th=[ 447], 00:31:10.393 | 99.99th=[ 447] 00:31:10.393 bw ( KiB/s): min=49152, max=67719, per=4.81%, avg=61167.15, stdev=5492.96, samples=20 00:31:10.393 iops : min= 192, max= 264, avg=238.80, stdev=21.51, samples=20 00:31:10.393 lat (msec) : 250=35.24%, 500=64.76% 00:31:10.393 cpu : usr=0.63%, sys=0.86%, ctx=2326, majf=0, minf=1 00:31:10.393 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:31:10.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.393 issued rwts: total=0,2452,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.393 job3: (groupid=0, jobs=1): err= 0: pid=67203: Wed Apr 17 08:27:42 2024 00:31:10.393 write: IOPS=260, BW=65.1MiB/s (68.3MB/s)(665MiB/10207msec); 0 zone resets 00:31:10.393 slat (usec): min=20, 
max=122037, avg=3659.36, stdev=7279.54 00:31:10.393 clat (msec): min=6, max=450, avg=241.89, stdev=44.88 00:31:10.393 lat (msec): min=6, max=450, avg=245.55, stdev=45.10 00:31:10.393 clat percentiles (msec): 00:31:10.393 | 1.00th=[ 32], 5.00th=[ 201], 10.00th=[ 213], 20.00th=[ 224], 00:31:10.393 | 30.00th=[ 230], 40.00th=[ 234], 50.00th=[ 239], 60.00th=[ 243], 00:31:10.393 | 70.00th=[ 249], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 296], 00:31:10.393 | 99.00th=[ 338], 99.50th=[ 405], 99.90th=[ 435], 99.95th=[ 451], 00:31:10.393 | 99.99th=[ 451] 00:31:10.393 bw ( KiB/s): min=55296, max=76288, per=5.22%, avg=66451.05, stdev=6316.93, samples=20 00:31:10.393 iops : min= 216, max= 298, avg=259.55, stdev=24.68, samples=20 00:31:10.393 lat (msec) : 10=0.30%, 20=0.56%, 50=0.45%, 100=0.75%, 250=68.60% 00:31:10.393 lat (msec) : 500=29.33% 00:31:10.393 cpu : usr=0.70%, sys=1.13%, ctx=3136, majf=0, minf=1 00:31:10.393 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:31:10.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.393 issued rwts: total=0,2659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.393 job4: (groupid=0, jobs=1): err= 0: pid=67204: Wed Apr 17 08:27:42 2024 00:31:10.393 write: IOPS=261, BW=65.5MiB/s (68.7MB/s)(669MiB/10207msec); 0 zone resets 00:31:10.393 slat (usec): min=17, max=65529, avg=3729.31, stdev=6806.61 00:31:10.393 clat (msec): min=6, max=453, avg=240.43, stdev=40.19 00:31:10.393 lat (msec): min=6, max=453, avg=244.16, stdev=40.27 00:31:10.393 clat percentiles (msec): 00:31:10.393 | 1.00th=[ 26], 5.00th=[ 211], 10.00th=[ 222], 20.00th=[ 232], 00:31:10.393 | 30.00th=[ 236], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:31:10.393 | 70.00th=[ 253], 80.00th=[ 257], 90.00th=[ 262], 95.00th=[ 266], 00:31:10.393 | 99.00th=[ 342], 99.50th=[ 405], 99.90th=[ 439], 99.95th=[ 456], 00:31:10.393 | 99.99th=[ 456] 00:31:10.393 bw ( KiB/s): min=61440, max=86528, per=5.25%, avg=66803.10, stdev=5431.04, samples=20 00:31:10.393 iops : min= 240, max= 338, avg=260.85, stdev=21.29, samples=20 00:31:10.393 lat (msec) : 10=0.11%, 20=0.60%, 50=1.27%, 100=0.45%, 250=55.46% 00:31:10.393 lat (msec) : 500=42.11% 00:31:10.393 cpu : usr=0.63%, sys=0.94%, ctx=2942, majf=0, minf=1 00:31:10.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:31:10.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.393 issued rwts: total=0,2674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.393 job5: (groupid=0, jobs=1): err= 0: pid=67205: Wed Apr 17 08:27:42 2024 00:31:10.393 write: IOPS=255, BW=63.8MiB/s (66.9MB/s)(651MiB/10202msec); 0 zone resets 00:31:10.393 slat (usec): min=19, max=82072, avg=3799.57, stdev=7222.39 00:31:10.393 clat (msec): min=15, max=447, avg=246.72, stdev=40.70 00:31:10.393 lat (msec): min=16, max=448, avg=250.52, stdev=40.72 00:31:10.393 clat percentiles (msec): 00:31:10.393 | 1.00th=[ 49], 5.00th=[ 213], 10.00th=[ 224], 20.00th=[ 232], 00:31:10.393 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 251], 00:31:10.393 | 70.00th=[ 257], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 292], 00:31:10.393 | 99.00th=[ 338], 99.50th=[ 401], 99.90th=[ 430], 99.95th=[ 447], 
00:31:10.393 | 99.99th=[ 447] 00:31:10.393 bw ( KiB/s): min=53248, max=77466, per=5.11%, avg=65063.50, stdev=5387.25, samples=20 00:31:10.393 iops : min= 208, max= 302, avg=254.05, stdev=20.98, samples=20 00:31:10.393 lat (msec) : 20=0.15%, 50=0.92%, 100=1.31%, 250=58.58%, 500=39.04% 00:31:10.393 cpu : usr=0.61%, sys=1.08%, ctx=3317, majf=0, minf=1 00:31:10.393 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:31:10.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.394 issued rwts: total=0,2605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.394 job6: (groupid=0, jobs=1): err= 0: pid=67206: Wed Apr 17 08:27:42 2024 00:31:10.394 write: IOPS=884, BW=221MiB/s (232MB/s)(2222MiB/10051msec); 0 zone resets 00:31:10.394 slat (usec): min=21, max=28303, avg=1114.87, stdev=1944.76 00:31:10.394 clat (msec): min=4, max=125, avg=71.23, stdev=17.17 00:31:10.394 lat (msec): min=4, max=125, avg=72.35, stdev=17.41 00:31:10.394 clat percentiles (msec): 00:31:10.394 | 1.00th=[ 29], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 52], 00:31:10.394 | 30.00th=[ 54], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 83], 00:31:10.394 | 70.00th=[ 85], 80.00th=[ 86], 90.00th=[ 87], 95.00th=[ 88], 00:31:10.394 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 121], 99.95th=[ 121], 00:31:10.394 | 99.99th=[ 126] 00:31:10.394 bw ( KiB/s): min=179559, max=326144, per=17.75%, avg=225806.80, stdev=55143.18, samples=20 00:31:10.394 iops : min= 701, max= 1274, avg=881.95, stdev=215.35, samples=20 00:31:10.394 lat (msec) : 10=0.33%, 20=0.29%, 50=10.40%, 100=88.62%, 250=0.36% 00:31:10.394 cpu : usr=2.13%, sys=2.77%, ctx=12332, majf=0, minf=1 00:31:10.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:10.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.394 issued rwts: total=0,8886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.394 job7: (groupid=0, jobs=1): err= 0: pid=67207: Wed Apr 17 08:27:42 2024 00:31:10.394 write: IOPS=249, BW=62.4MiB/s (65.4MB/s)(637MiB/10202msec); 0 zone resets 00:31:10.394 slat (usec): min=22, max=74671, avg=3920.74, stdev=7517.12 00:31:10.394 clat (msec): min=80, max=446, avg=252.23, stdev=33.88 00:31:10.394 lat (msec): min=80, max=446, avg=256.15, stdev=33.55 00:31:10.394 clat percentiles (msec): 00:31:10.394 | 1.00th=[ 148], 5.00th=[ 220], 10.00th=[ 224], 20.00th=[ 232], 00:31:10.394 | 30.00th=[ 236], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:31:10.394 | 70.00th=[ 259], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 309], 00:31:10.394 | 99.00th=[ 351], 99.50th=[ 401], 99.90th=[ 430], 99.95th=[ 447], 00:31:10.394 | 99.99th=[ 447] 00:31:10.394 bw ( KiB/s): min=51200, max=70144, per=5.00%, avg=63595.55, stdev=6090.60, samples=20 00:31:10.394 iops : min= 200, max= 274, avg=248.30, stdev=23.77, samples=20 00:31:10.394 lat (msec) : 100=0.35%, 250=59.80%, 500=39.85% 00:31:10.394 cpu : usr=0.67%, sys=1.09%, ctx=2767, majf=0, minf=1 00:31:10.394 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:31:10.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.394 
issued rwts: total=0,2547,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.394 job8: (groupid=0, jobs=1): err= 0: pid=67208: Wed Apr 17 08:27:42 2024 00:31:10.394 write: IOPS=581, BW=145MiB/s (152MB/s)(1464MiB/10075msec); 0 zone resets 00:31:10.394 slat (usec): min=23, max=141728, avg=1663.06, stdev=4107.68 00:31:10.394 clat (msec): min=8, max=336, avg=108.41, stdev=64.19 00:31:10.394 lat (msec): min=9, max=336, avg=110.07, stdev=65.10 00:31:10.394 clat percentiles (msec): 00:31:10.394 | 1.00th=[ 28], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:31:10.394 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 87], 00:31:10.394 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 255], 95.00th=[ 275], 00:31:10.394 | 99.00th=[ 296], 99.50th=[ 300], 99.90th=[ 330], 99.95th=[ 338], 00:31:10.394 | 99.99th=[ 338] 00:31:10.394 bw ( KiB/s): min=43008, max=199680, per=11.65%, avg=148218.85, stdev=61750.39, samples=20 00:31:10.394 iops : min= 168, max= 780, avg=578.90, stdev=241.16, samples=20 00:31:10.394 lat (msec) : 10=0.07%, 20=0.48%, 50=1.76%, 100=83.04%, 250=3.96% 00:31:10.394 lat (msec) : 500=10.69% 00:31:10.394 cpu : usr=1.40%, sys=2.24%, ctx=7649, majf=0, minf=1 00:31:10.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:31:10.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.394 issued rwts: total=0,5855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.394 job9: (groupid=0, jobs=1): err= 0: pid=67209: Wed Apr 17 08:27:42 2024 00:31:10.394 write: IOPS=897, BW=224MiB/s (235MB/s)(2254MiB/10047msec); 0 zone resets 00:31:10.394 slat (usec): min=19, max=21117, avg=1105.90, stdev=1929.15 00:31:10.394 clat (msec): min=23, max=114, avg=70.21, stdev=17.58 00:31:10.394 lat (msec): min=23, max=114, avg=71.32, stdev=17.78 00:31:10.394 clat percentiles (msec): 00:31:10.394 | 1.00th=[ 45], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 49], 00:31:10.394 | 30.00th=[ 50], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 83], 00:31:10.394 | 70.00th=[ 85], 80.00th=[ 86], 90.00th=[ 87], 95.00th=[ 88], 00:31:10.394 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 104], 99.95th=[ 110], 00:31:10.394 | 99.99th=[ 115] 00:31:10.394 bw ( KiB/s): min=185202, max=346624, per=18.01%, avg=229149.25, stdev=62475.00, samples=20 00:31:10.394 iops : min= 723, max= 1354, avg=895.00, stdev=243.96, samples=20 00:31:10.394 lat (msec) : 50=31.52%, 100=68.36%, 250=0.12% 00:31:10.394 cpu : usr=2.00%, sys=2.87%, ctx=10241, majf=0, minf=1 00:31:10.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:10.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.394 issued rwts: total=0,9014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.394 job10: (groupid=0, jobs=1): err= 0: pid=67210: Wed Apr 17 08:27:42 2024 00:31:10.394 write: IOPS=248, BW=62.1MiB/s (65.1MB/s)(634MiB/10205msec); 0 zone resets 00:31:10.394 slat (usec): min=15, max=76335, avg=3942.54, stdev=7573.34 00:31:10.394 clat (msec): min=50, max=448, avg=253.67, stdev=36.21 00:31:10.394 lat (msec): min=50, max=448, avg=257.62, stdev=35.96 00:31:10.394 clat percentiles (msec): 00:31:10.394 | 1.00th=[ 106], 5.00th=[ 220], 
10.00th=[ 228], 20.00th=[ 236], 00:31:10.394 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:31:10.394 | 70.00th=[ 262], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 305], 00:31:10.394 | 99.00th=[ 355], 99.50th=[ 401], 99.90th=[ 435], 99.95th=[ 447], 00:31:10.394 | 99.99th=[ 451] 00:31:10.394 bw ( KiB/s): min=51200, max=70144, per=4.97%, avg=63270.35, stdev=5529.59, samples=20 00:31:10.394 iops : min= 200, max= 274, avg=247.05, stdev=21.55, samples=20 00:31:10.394 lat (msec) : 100=0.95%, 250=52.57%, 500=46.49% 00:31:10.394 cpu : usr=0.74%, sys=0.77%, ctx=2453, majf=0, minf=1 00:31:10.394 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:31:10.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:31:10.394 issued rwts: total=0,2534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:10.394 00:31:10.394 Run status group 0 (all jobs): 00:31:10.394 WRITE: bw=1242MiB/s (1303MB/s), 60.1MiB/s-224MiB/s (63.0MB/s-235MB/s), io=12.4GiB (13.3GB), run=10047-10207msec 00:31:10.394 00:31:10.394 Disk stats (read/write): 00:31:10.394 nvme0n1: ios=49/4892, merge=0/0, ticks=44/1208292, in_queue=1208336, util=98.26% 00:31:10.394 nvme10n1: ios=45/17876, merge=0/0, ticks=33/1218954, in_queue=1218987, util=98.37% 00:31:10.394 nvme1n1: ios=23/4786, merge=0/0, ticks=30/1207426, in_queue=1207456, util=98.27% 00:31:10.394 nvme2n1: ios=0/5207, merge=0/0, ticks=0/1211728, in_queue=1211728, util=98.46% 00:31:10.394 nvme3n1: ios=0/5227, merge=0/0, ticks=0/1209200, in_queue=1209200, util=98.36% 00:31:10.394 nvme4n1: ios=20/5095, merge=0/0, ticks=25/1209682, in_queue=1209707, util=98.58% 00:31:10.394 nvme5n1: ios=0/17680, merge=0/0, ticks=0/1221199, in_queue=1221199, util=98.56% 00:31:10.394 nvme6n1: ios=0/4975, merge=0/0, ticks=0/1208200, in_queue=1208200, util=98.55% 00:31:10.394 nvme7n1: ios=0/11612, merge=0/0, ticks=0/1220179, in_queue=1220179, util=98.76% 00:31:10.394 nvme8n1: ios=0/17928, merge=0/0, ticks=0/1220764, in_queue=1220764, util=98.80% 00:31:10.394 nvme9n1: ios=0/4950, merge=0/0, ticks=0/1208815, in_queue=1208815, util=98.84% 00:31:10.394 08:27:42 -- target/multiconnection.sh@36 -- # sync 00:31:10.394 08:27:42 -- target/multiconnection.sh@37 -- # seq 1 11 00:31:10.394 08:27:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.394 08:27:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:10.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:10.394 08:27:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:31:10.394 08:27:42 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.394 08:27:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.394 08:27:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:31:10.394 08:27:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.394 08:27:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:31:10.394 08:27:42 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.394 08:27:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.394 08:27:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.394 08:27:42 -- common/autotest_common.sh@10 -- # set +x 00:31:10.394 08:27:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:31:10.394 08:27:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.394 08:27:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:31:10.394 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:31:10.394 08:27:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:31:10.394 08:27:42 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.394 08:27:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.394 08:27:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:31:10.394 08:27:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.394 08:27:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:31:10.394 08:27:42 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.394 08:27:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:10.394 08:27:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.394 08:27:42 -- common/autotest_common.sh@10 -- # set +x 00:31:10.394 08:27:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.394 08:27:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.395 08:27:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:31:10.395 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:31:10.395 08:27:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:31:10.395 08:27:42 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.395 08:27:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.395 08:27:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:31:10.395 08:27:43 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.395 08:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:10.395 08:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.395 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.395 08:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.395 08:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.395 08:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:31:10.395 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:31:10.395 08:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:31:10.395 08:27:43 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:31:10.395 08:27:43 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.395 08:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:31:10.395 08:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.395 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.395 08:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.395 08:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:31:10.395 08:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:31:10.395 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:31:10.395 08:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:31:10.395 08:27:43 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:31:10.395 08:27:43 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.395 08:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:31:10.395 08:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.395 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.395 08:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.395 08:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.395 08:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:31:10.395 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:31:10.395 08:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:31:10.395 08:27:43 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:31:10.395 08:27:43 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.395 08:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:31:10.395 08:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.395 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.395 08:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.395 08:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.395 08:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:31:10.395 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:31:10.395 08:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:31:10.395 08:27:43 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:31:10.395 08:27:43 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.395 08:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:31:10.395 08:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.395 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.395 08:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.395 08:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.395 08:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode8 00:31:10.395 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:31:10.395 08:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:31:10.395 08:27:43 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:31:10.395 08:27:43 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.395 08:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:31:10.395 08:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.395 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.395 08:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.395 08:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.395 08:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:31:10.395 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:31:10.395 08:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:31:10.395 08:27:43 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:31:10.395 08:27:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.395 08:27:43 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.395 08:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:31:10.395 08:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.395 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.395 08:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.395 08:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.395 08:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:31:10.655 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:31:10.655 08:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:31:10.655 08:27:43 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.655 08:27:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.655 08:27:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:31:10.655 08:27:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.655 08:27:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:31:10.655 08:27:43 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.655 08:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:31:10.655 08:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.655 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.655 08:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.655 08:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:10.655 08:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:31:10.655 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 
controller(s) 00:31:10.655 08:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:31:10.655 08:27:43 -- common/autotest_common.sh@1198 -- # local i=0 00:31:10.655 08:27:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:31:10.655 08:27:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:31:10.655 08:27:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:10.655 08:27:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:31:10.655 08:27:43 -- common/autotest_common.sh@1210 -- # return 0 00:31:10.655 08:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:31:10.655 08:27:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.655 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:31:10.655 08:27:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.655 08:27:43 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:31:10.655 08:27:43 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:31:10.655 08:27:43 -- target/multiconnection.sh@47 -- # nvmftestfini 00:31:10.655 08:27:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:10.655 08:27:43 -- nvmf/common.sh@116 -- # sync 00:31:10.655 08:27:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:10.655 08:27:43 -- nvmf/common.sh@119 -- # set +e 00:31:10.655 08:27:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:10.655 08:27:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:10.655 rmmod nvme_tcp 00:31:10.655 rmmod nvme_fabrics 00:31:10.655 rmmod nvme_keyring 00:31:10.655 08:27:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:10.655 08:27:43 -- nvmf/common.sh@123 -- # set -e 00:31:10.655 08:27:43 -- nvmf/common.sh@124 -- # return 0 00:31:10.655 08:27:43 -- nvmf/common.sh@477 -- # '[' -n 66525 ']' 00:31:10.655 08:27:43 -- nvmf/common.sh@478 -- # killprocess 66525 00:31:10.655 08:27:43 -- common/autotest_common.sh@926 -- # '[' -z 66525 ']' 00:31:10.655 08:27:43 -- common/autotest_common.sh@930 -- # kill -0 66525 00:31:10.655 08:27:43 -- common/autotest_common.sh@931 -- # uname 00:31:10.655 08:27:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:10.655 08:27:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66525 00:31:10.655 killing process with pid 66525 00:31:10.655 08:27:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:10.655 08:27:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:10.655 08:27:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66525' 00:31:10.655 08:27:43 -- common/autotest_common.sh@945 -- # kill 66525 00:31:10.655 08:27:43 -- common/autotest_common.sh@950 -- # wait 66525 00:31:11.222 08:27:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:11.222 08:27:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:11.222 08:27:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:11.222 08:27:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:11.222 08:27:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:11.222 08:27:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.222 08:27:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.222 08:27:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.222 08:27:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:31:11.222 00:31:11.222 real 0m49.463s 00:31:11.222 user 
2m49.556s 00:31:11.222 sys 0m30.649s 00:31:11.222 08:27:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:11.222 08:27:44 -- common/autotest_common.sh@10 -- # set +x 00:31:11.222 ************************************ 00:31:11.222 END TEST nvmf_multiconnection 00:31:11.222 ************************************ 00:31:11.481 08:27:44 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:31:11.481 08:27:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:11.481 08:27:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:11.481 08:27:44 -- common/autotest_common.sh@10 -- # set +x 00:31:11.481 ************************************ 00:31:11.481 START TEST nvmf_initiator_timeout 00:31:11.481 ************************************ 00:31:11.481 08:27:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:31:11.481 * Looking for test storage... 00:31:11.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:11.481 08:27:44 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:11.481 08:27:44 -- nvmf/common.sh@7 -- # uname -s 00:31:11.481 08:27:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.481 08:27:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.481 08:27:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.481 08:27:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.481 08:27:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.481 08:27:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.481 08:27:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.481 08:27:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.481 08:27:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.481 08:27:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.481 08:27:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:31:11.481 08:27:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:31:11.481 08:27:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.481 08:27:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.481 08:27:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:11.481 08:27:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:11.481 08:27:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.481 08:27:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.481 08:27:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.481 08:27:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.481 08:27:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.481 08:27:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.481 08:27:44 -- paths/export.sh@5 -- # export PATH 00:31:11.481 08:27:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.481 08:27:44 -- nvmf/common.sh@46 -- # : 0 00:31:11.481 08:27:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:11.481 08:27:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:11.481 08:27:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:11.481 08:27:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.481 08:27:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.481 08:27:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:11.481 08:27:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:11.481 08:27:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:11.481 08:27:44 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:11.481 08:27:44 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:11.481 08:27:44 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:31:11.481 08:27:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:11.481 08:27:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.481 08:27:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:11.481 08:27:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:11.481 08:27:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:11.481 08:27:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.481 08:27:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.481 08:27:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.481 08:27:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:31:11.481 08:27:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:31:11.481 08:27:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:31:11.481 08:27:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:31:11.481 08:27:44 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:31:11.481 08:27:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:31:11.482 08:27:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.482 08:27:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.482 08:27:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:11.482 08:27:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:31:11.482 08:27:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:11.482 08:27:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:11.482 08:27:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:11.482 08:27:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.482 08:27:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:11.482 08:27:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:11.482 08:27:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:11.482 08:27:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:11.482 08:27:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:31:11.482 08:27:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:31:11.482 Cannot find device "nvmf_tgt_br" 00:31:11.482 08:27:44 -- nvmf/common.sh@154 -- # true 00:31:11.482 08:27:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:31:11.482 Cannot find device "nvmf_tgt_br2" 00:31:11.482 08:27:44 -- nvmf/common.sh@155 -- # true 00:31:11.482 08:27:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:31:11.482 08:27:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:31:11.482 Cannot find device "nvmf_tgt_br" 00:31:11.482 08:27:44 -- nvmf/common.sh@157 -- # true 00:31:11.482 08:27:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:31:11.482 Cannot find device "nvmf_tgt_br2" 00:31:11.482 08:27:44 -- nvmf/common.sh@158 -- # true 00:31:11.482 08:27:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:31:11.740 08:27:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:31:11.740 08:27:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:11.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:11.740 08:27:44 -- nvmf/common.sh@161 -- # true 00:31:11.740 08:27:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:11.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:11.740 08:27:44 -- nvmf/common.sh@162 -- # true 00:31:11.740 08:27:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:31:11.740 08:27:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:11.740 08:27:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:11.740 08:27:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:11.740 08:27:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:11.740 08:27:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:11.740 08:27:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:11.740 08:27:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:11.740 08:27:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
00:31:11.740 08:27:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:31:11.740 08:27:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:31:11.740 08:27:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:31:11.740 08:27:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:31:11.740 08:27:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:11.740 08:27:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:11.740 08:27:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:11.740 08:27:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:31:11.740 08:27:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:31:11.740 08:27:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:31:11.740 08:27:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:11.740 08:27:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:11.740 08:27:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:11.740 08:27:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:11.740 08:27:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:31:11.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:31:11.740 00:31:11.740 --- 10.0.0.2 ping statistics --- 00:31:11.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.740 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:31:11.740 08:27:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:31:11.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:11.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:31:11.740 00:31:11.740 --- 10.0.0.3 ping statistics --- 00:31:11.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.740 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:31:11.740 08:27:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:11.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:11.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:31:11.998 00:31:11.998 --- 10.0.0.1 ping statistics --- 00:31:11.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.998 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:31:11.998 08:27:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.998 08:27:45 -- nvmf/common.sh@421 -- # return 0 00:31:11.998 08:27:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:11.998 08:27:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.998 08:27:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:11.998 08:27:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:11.998 08:27:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.999 08:27:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:11.999 08:27:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:11.999 08:27:45 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:31:11.999 08:27:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:11.999 08:27:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:11.999 08:27:45 -- common/autotest_common.sh@10 -- # set +x 00:31:11.999 08:27:45 -- nvmf/common.sh@469 -- # nvmfpid=67584 00:31:11.999 08:27:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:11.999 08:27:45 -- nvmf/common.sh@470 -- # waitforlisten 67584 00:31:11.999 08:27:45 -- common/autotest_common.sh@819 -- # '[' -z 67584 ']' 00:31:11.999 08:27:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.999 08:27:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:11.999 08:27:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.999 08:27:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:11.999 08:27:45 -- common/autotest_common.sh@10 -- # set +x 00:31:11.999 [2024-04-17 08:27:45.167963] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:11.999 [2024-04-17 08:27:45.168034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.999 [2024-04-17 08:27:45.306081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.263 [2024-04-17 08:27:45.402485] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:12.263 [2024-04-17 08:27:45.402615] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.263 [2024-04-17 08:27:45.402623] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.263 [2024-04-17 08:27:45.402628] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
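[editorial note] nvmfappstart above launches the SPDK target inside the namespace with core mask 0xF and then blocks in waitforlisten until the app accepts RPCs. A minimal sketch of that launch sequence, assuming the repository layout used in this run and using rpc_get_methods as a simplified stand-in for the real waitforlisten polling:

    # Sketch of nvmfappstart: start nvmf_tgt in the target namespace, then wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the app listens on /var/tmp/spdk.sock (simplified; the harness also checks the pid)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket is up, the reactors start on cores 0-3 (pid 67584 in this run) and the test issues its bdev and subsystem RPCs against that socket, as traced below.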
00:31:12.263 [2024-04-17 08:27:45.402836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.263 [2024-04-17 08:27:45.403426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.263 [2024-04-17 08:27:45.403494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.263 [2024-04-17 08:27:45.403496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:12.829 08:27:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:12.829 08:27:46 -- common/autotest_common.sh@852 -- # return 0 00:31:12.829 08:27:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:12.829 08:27:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:12.829 08:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:12.829 08:27:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.829 08:27:46 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:31:12.829 08:27:46 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:12.830 08:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.830 08:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:12.830 Malloc0 00:31:12.830 08:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.830 08:27:46 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:31:12.830 08:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.830 08:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:12.830 Delay0 00:31:12.830 08:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.830 08:27:46 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:12.830 08:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.830 08:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:12.830 [2024-04-17 08:27:46.092846] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.830 08:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.830 08:27:46 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:12.830 08:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.830 08:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:12.830 08:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.830 08:27:46 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.830 08:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.830 08:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:12.830 08:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.830 08:27:46 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.830 08:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.830 08:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:12.830 [2024-04-17 08:27:46.132934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.830 08:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.830 08:27:46 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:13.088 08:27:46 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:31:13.088 08:27:46 -- common/autotest_common.sh@1177 -- # local i=0 00:31:13.088 08:27:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:31:13.088 08:27:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:31:13.088 08:27:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:31:14.988 08:27:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:31:14.988 08:27:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:31:14.988 08:27:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:31:14.988 08:27:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:31:14.988 08:27:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:31:14.988 08:27:48 -- common/autotest_common.sh@1187 -- # return 0 00:31:14.989 08:27:48 -- target/initiator_timeout.sh@35 -- # fio_pid=67649 00:31:14.989 08:27:48 -- target/initiator_timeout.sh@37 -- # sleep 3 00:31:14.989 08:27:48 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:31:14.989 [global] 00:31:14.989 thread=1 00:31:14.989 invalidate=1 00:31:14.989 rw=write 00:31:14.989 time_based=1 00:31:14.989 runtime=60 00:31:14.989 ioengine=libaio 00:31:14.989 direct=1 00:31:14.989 bs=4096 00:31:14.989 iodepth=1 00:31:14.989 norandommap=0 00:31:14.989 numjobs=1 00:31:14.989 00:31:14.989 verify_dump=1 00:31:14.989 verify_backlog=512 00:31:14.989 verify_state_save=0 00:31:14.989 do_verify=1 00:31:14.989 verify=crc32c-intel 00:31:15.248 [job0] 00:31:15.248 filename=/dev/nvme0n1 00:31:15.248 Could not set queue depth (nvme0n1) 00:31:15.248 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:15.248 fio-3.35 00:31:15.248 Starting 1 thread 00:31:18.533 08:27:51 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:31:18.533 08:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.533 08:27:51 -- common/autotest_common.sh@10 -- # set +x 00:31:18.533 true 00:31:18.533 08:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.533 08:27:51 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:31:18.533 08:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.533 08:27:51 -- common/autotest_common.sh@10 -- # set +x 00:31:18.533 true 00:31:18.534 08:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.534 08:27:51 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:31:18.534 08:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.534 08:27:51 -- common/autotest_common.sh@10 -- # set +x 00:31:18.534 true 00:31:18.534 08:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.534 08:27:51 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:31:18.534 08:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.534 08:27:51 -- common/autotest_common.sh@10 -- # set +x 00:31:18.534 true 00:31:18.534 08:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.534 08:27:51 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:31:21.068 08:27:54 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:31:21.068 08:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.068 08:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.068 true 00:31:21.068 08:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.068 08:27:54 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:31:21.068 08:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.068 08:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.068 true 00:31:21.068 08:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.068 08:27:54 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:31:21.068 08:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.068 08:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.068 true 00:31:21.068 08:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.068 08:27:54 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:31:21.068 08:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.068 08:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.068 true 00:31:21.068 08:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.068 08:27:54 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:31:21.068 08:27:54 -- target/initiator_timeout.sh@54 -- # wait 67649 00:32:17.288 00:32:17.288 job0: (groupid=0, jobs=1): err= 0: pid=67670: Wed Apr 17 08:28:48 2024 00:32:17.288 read: IOPS=938, BW=3755KiB/s (3845kB/s)(220MiB/60000msec) 00:32:17.288 slat (usec): min=5, max=15806, avg= 9.02, stdev=82.76 00:32:17.288 clat (usec): min=123, max=40701k, avg=903.39, stdev=171504.05 00:32:17.288 lat (usec): min=129, max=40701k, avg=912.41, stdev=171504.06 00:32:17.288 clat percentiles (usec): 00:32:17.288 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:32:17.288 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:32:17.288 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 210], 00:32:17.288 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 273], 99.95th=[ 437], 00:32:17.288 | 99.99th=[ 1713] 00:32:17.288 write: IOPS=944, BW=3779KiB/s (3870kB/s)(221MiB/60000msec); 0 zone resets 00:32:17.288 slat (usec): min=7, max=722, avg=13.12, stdev= 5.59 00:32:17.288 clat (usec): min=100, max=3875, avg=136.62, stdev=31.20 00:32:17.288 lat (usec): min=110, max=3897, avg=149.75, stdev=32.03 00:32:17.288 clat percentiles (usec): 00:32:17.288 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:32:17.288 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:32:17.288 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:32:17.288 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 217], 99.95th=[ 262], 00:32:17.288 | 99.99th=[ 1221] 00:32:17.288 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=11339.74, stdev=1831.14, samples=39 00:32:17.288 iops : min= 1024, max= 3072, avg=2834.97, stdev=457.80, samples=39 00:32:17.288 lat (usec) : 250=99.84%, 500=0.13%, 750=0.01%, 1000=0.01% 00:32:17.288 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:32:17.288 cpu : usr=0.39%, sys=1.60%, ctx=113021, majf=0, minf=2 00:32:17.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:17.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.288 issued rwts: total=56320,56687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.288 00:32:17.288 Run status group 0 (all jobs): 00:32:17.288 READ: bw=3755KiB/s (3845kB/s), 3755KiB/s-3755KiB/s (3845kB/s-3845kB/s), io=220MiB (231MB), run=60000-60000msec 00:32:17.288 WRITE: bw=3779KiB/s (3870kB/s), 3779KiB/s-3779KiB/s (3870kB/s-3870kB/s), io=221MiB (232MB), run=60000-60000msec 00:32:17.288 00:32:17.288 Disk stats (read/write): 00:32:17.288 nvme0n1: ios=56549/56320, merge=0/0, ticks=10354/7991, in_queue=18345, util=99.87% 00:32:17.288 08:28:48 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:17.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:17.288 08:28:48 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:17.288 08:28:48 -- common/autotest_common.sh@1198 -- # local i=0 00:32:17.288 08:28:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:32:17.288 08:28:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:17.288 08:28:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:32:17.288 08:28:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:17.288 08:28:48 -- common/autotest_common.sh@1210 -- # return 0 00:32:17.288 08:28:48 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:32:17.288 nvmf hotplug test: fio successful as expected 00:32:17.288 08:28:48 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:32:17.288 08:28:48 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.288 08:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.288 08:28:48 -- common/autotest_common.sh@10 -- # set +x 00:32:17.288 08:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.288 08:28:48 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:32:17.288 08:28:48 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:32:17.288 08:28:48 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:32:17.288 08:28:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:17.288 08:28:48 -- nvmf/common.sh@116 -- # sync 00:32:17.288 08:28:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:17.288 08:28:48 -- nvmf/common.sh@119 -- # set +e 00:32:17.288 08:28:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:17.288 08:28:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:17.288 rmmod nvme_tcp 00:32:17.288 rmmod nvme_fabrics 00:32:17.288 rmmod nvme_keyring 00:32:17.288 08:28:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:17.288 08:28:48 -- nvmf/common.sh@123 -- # set -e 00:32:17.288 08:28:48 -- nvmf/common.sh@124 -- # return 0 00:32:17.288 08:28:48 -- nvmf/common.sh@477 -- # '[' -n 67584 ']' 00:32:17.288 08:28:48 -- nvmf/common.sh@478 -- # killprocess 67584 00:32:17.288 08:28:48 -- common/autotest_common.sh@926 -- # '[' -z 67584 ']' 00:32:17.288 08:28:48 -- common/autotest_common.sh@930 -- # kill -0 67584 00:32:17.288 08:28:48 -- common/autotest_common.sh@931 -- # uname 00:32:17.288 08:28:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:17.288 08:28:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67584 00:32:17.288 08:28:48 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:17.288 08:28:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:17.288 killing process with pid 67584 00:32:17.288 08:28:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67584' 00:32:17.288 08:28:48 -- common/autotest_common.sh@945 -- # kill 67584 00:32:17.288 08:28:48 -- common/autotest_common.sh@950 -- # wait 67584 00:32:17.288 08:28:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:17.288 08:28:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:17.288 08:28:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:17.288 08:28:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:17.288 08:28:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:17.288 08:28:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.288 08:28:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:17.288 08:28:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.288 08:28:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:32:17.288 00:32:17.288 real 1m4.442s 00:32:17.288 user 4m0.334s 00:32:17.288 sys 0m14.224s 00:32:17.288 08:28:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:17.288 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:32:17.288 ************************************ 00:32:17.288 END TEST nvmf_initiator_timeout 00:32:17.288 ************************************ 00:32:17.288 08:28:49 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:32:17.288 08:28:49 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:32:17.288 08:28:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:17.288 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:32:17.288 08:28:49 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:32:17.288 08:28:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:17.288 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:32:17.288 08:28:49 -- nvmf/nvmf.sh@89 -- # [[ 1 -eq 0 ]] 00:32:17.288 08:28:49 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:17.288 08:28:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:17.288 08:28:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:17.288 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:32:17.288 ************************************ 00:32:17.288 START TEST nvmf_identify 00:32:17.288 ************************************ 00:32:17.288 08:28:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:17.288 * Looking for test storage... 
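[editorial note] The teardown traced above follows a fixed order before the next test (nvmf_identify) starts. A condensed sketch of that order, with the pid variable illustrative and the RPC/module names taken from the log:

    # Sketch of the initiator_timeout teardown order seen above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                                  # drop the initiator connection first
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem
    kill "$nvmfpid" && wait "$nvmfpid"                                             # stop the nvmf_tgt app (killprocess)
    modprobe -r nvme-tcp nvme-fabrics                                              # unload initiator kernel modules
    ip -4 addr flush nvmf_init_if                                                  # flush the test addresses

The identify test that follows then repeats the same namespace/veth setup from scratch, which is why the nvmf_veth_init trace appears again below.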
00:32:17.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:17.288 08:28:49 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:17.288 08:28:49 -- nvmf/common.sh@7 -- # uname -s 00:32:17.288 08:28:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.288 08:28:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.288 08:28:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.288 08:28:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.288 08:28:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.288 08:28:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.288 08:28:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.288 08:28:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.288 08:28:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.288 08:28:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.288 08:28:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:32:17.288 08:28:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:32:17.288 08:28:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.288 08:28:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.288 08:28:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:17.288 08:28:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:17.288 08:28:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.288 08:28:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.288 08:28:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.289 08:28:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.289 08:28:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.289 08:28:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.289 08:28:49 -- paths/export.sh@5 
-- # export PATH 00:32:17.289 08:28:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.289 08:28:49 -- nvmf/common.sh@46 -- # : 0 00:32:17.289 08:28:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:17.289 08:28:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:17.289 08:28:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:17.289 08:28:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.289 08:28:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.289 08:28:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:17.289 08:28:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:17.289 08:28:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:17.289 08:28:49 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:17.289 08:28:49 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:17.289 08:28:49 -- host/identify.sh@14 -- # nvmftestinit 00:32:17.289 08:28:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:17.289 08:28:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.289 08:28:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:17.289 08:28:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:17.289 08:28:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:17.289 08:28:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.289 08:28:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:17.289 08:28:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.289 08:28:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:32:17.289 08:28:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:32:17.289 08:28:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:32:17.289 08:28:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:32:17.289 08:28:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:32:17.289 08:28:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:32:17.289 08:28:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.289 08:28:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.289 08:28:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:17.289 08:28:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:32:17.289 08:28:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:17.289 08:28:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:17.289 08:28:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:17.289 08:28:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.289 08:28:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:17.289 08:28:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:17.289 08:28:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:17.289 08:28:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:17.289 08:28:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:32:17.289 08:28:49 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:32:17.289 Cannot find device "nvmf_tgt_br" 00:32:17.289 08:28:49 -- nvmf/common.sh@154 -- # true 00:32:17.289 08:28:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:32:17.289 Cannot find device "nvmf_tgt_br2" 00:32:17.289 08:28:49 -- nvmf/common.sh@155 -- # true 00:32:17.289 08:28:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:32:17.289 08:28:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:32:17.289 Cannot find device "nvmf_tgt_br" 00:32:17.289 08:28:49 -- nvmf/common.sh@157 -- # true 00:32:17.289 08:28:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:32:17.289 Cannot find device "nvmf_tgt_br2" 00:32:17.289 08:28:49 -- nvmf/common.sh@158 -- # true 00:32:17.289 08:28:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:32:17.289 08:28:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:32:17.289 08:28:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:17.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:17.289 08:28:49 -- nvmf/common.sh@161 -- # true 00:32:17.289 08:28:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:17.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:17.289 08:28:49 -- nvmf/common.sh@162 -- # true 00:32:17.289 08:28:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:32:17.289 08:28:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:17.289 08:28:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:17.289 08:28:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:17.289 08:28:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:17.289 08:28:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:17.289 08:28:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:17.289 08:28:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:17.289 08:28:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:17.289 08:28:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:32:17.289 08:28:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:32:17.289 08:28:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:32:17.289 08:28:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:32:17.289 08:28:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:17.289 08:28:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:17.289 08:28:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:17.289 08:28:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:32:17.289 08:28:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:32:17.289 08:28:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:32:17.289 08:28:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:17.289 08:28:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:17.289 08:28:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:17.289 08:28:49 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:17.289 08:28:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:32:17.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:32:17.289 00:32:17.289 --- 10.0.0.2 ping statistics --- 00:32:17.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.289 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:32:17.289 08:28:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:32:17.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:17.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:32:17.289 00:32:17.289 --- 10.0.0.3 ping statistics --- 00:32:17.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.289 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:32:17.289 08:28:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:17.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:32:17.289 00:32:17.289 --- 10.0.0.1 ping statistics --- 00:32:17.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.289 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:32:17.289 08:28:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.289 08:28:49 -- nvmf/common.sh@421 -- # return 0 00:32:17.289 08:28:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:17.289 08:28:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.289 08:28:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:17.289 08:28:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:17.289 08:28:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.289 08:28:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:17.289 08:28:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:17.289 08:28:49 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:32:17.289 08:28:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:17.289 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:32:17.289 08:28:49 -- host/identify.sh@19 -- # nvmfpid=68514 00:32:17.289 08:28:49 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:17.289 08:28:49 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:17.289 08:28:49 -- host/identify.sh@23 -- # waitforlisten 68514 00:32:17.289 08:28:49 -- common/autotest_common.sh@819 -- # '[' -z 68514 ']' 00:32:17.289 08:28:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.289 08:28:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:17.289 08:28:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.289 08:28:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:17.289 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:32:17.289 [2024-04-17 08:28:49.659367] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:32:17.289 [2024-04-17 08:28:49.659444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.289 [2024-04-17 08:28:49.797958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:17.289 [2024-04-17 08:28:49.895543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:17.289 [2024-04-17 08:28:49.895717] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.289 [2024-04-17 08:28:49.895726] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.289 [2024-04-17 08:28:49.895731] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.289 [2024-04-17 08:28:49.895808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.289 [2024-04-17 08:28:49.895959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:17.289 [2024-04-17 08:28:49.896151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.289 [2024-04-17 08:28:49.896154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:17.289 08:28:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:17.289 08:28:50 -- common/autotest_common.sh@852 -- # return 0 00:32:17.289 08:28:50 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:17.289 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.289 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.289 [2024-04-17 08:28:50.509627] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.289 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.289 08:28:50 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:32:17.289 08:28:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:17.289 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.289 08:28:50 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:17.289 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.289 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.289 Malloc0 00:32:17.289 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.289 08:28:50 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:17.289 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.289 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.289 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.289 08:28:50 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:32:17.289 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.289 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.289 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.289 08:28:50 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:17.289 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.289 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.552 [2024-04-17 08:28:50.622728] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.552 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.552 08:28:50 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:17.552 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.552 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.552 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.552 08:28:50 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:32:17.552 08:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.552 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.552 [2024-04-17 08:28:50.638481] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:17.552 [ 00:32:17.552 { 00:32:17.552 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:17.552 "subtype": "Discovery", 00:32:17.552 "listen_addresses": [ 00:32:17.552 { 00:32:17.552 "transport": "TCP", 00:32:17.552 "trtype": "TCP", 00:32:17.552 "adrfam": "IPv4", 00:32:17.552 "traddr": "10.0.0.2", 00:32:17.552 "trsvcid": "4420" 00:32:17.552 } 00:32:17.552 ], 00:32:17.552 "allow_any_host": true, 00:32:17.552 "hosts": [] 00:32:17.552 }, 00:32:17.552 { 00:32:17.552 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.552 "subtype": "NVMe", 00:32:17.552 "listen_addresses": [ 00:32:17.552 { 00:32:17.552 "transport": "TCP", 00:32:17.552 "trtype": "TCP", 00:32:17.552 "adrfam": "IPv4", 00:32:17.552 "traddr": "10.0.0.2", 00:32:17.552 "trsvcid": "4420" 00:32:17.552 } 00:32:17.552 ], 00:32:17.552 "allow_any_host": true, 00:32:17.552 "hosts": [], 00:32:17.552 "serial_number": "SPDK00000000000001", 00:32:17.552 "model_number": "SPDK bdev Controller", 00:32:17.552 "max_namespaces": 32, 00:32:17.552 "min_cntlid": 1, 00:32:17.552 "max_cntlid": 65519, 00:32:17.552 "namespaces": [ 00:32:17.552 { 00:32:17.552 "nsid": 1, 00:32:17.552 "bdev_name": "Malloc0", 00:32:17.552 "name": "Malloc0", 00:32:17.552 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:32:17.552 "eui64": "ABCDEF0123456789", 00:32:17.552 "uuid": "0cd123af-d1bb-455c-b9e0-9417ea491a87" 00:32:17.552 } 00:32:17.552 ] 00:32:17.552 } 00:32:17.552 ] 00:32:17.552 08:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.552 08:28:50 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:32:17.552 [2024-04-17 08:28:50.670263] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:32:17.552 [2024-04-17 08:28:50.670385] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68545 ] 00:32:17.552 [2024-04-17 08:28:50.806756] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:32:17.552 [2024-04-17 08:28:50.806819] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:17.552 [2024-04-17 08:28:50.806824] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:17.552 [2024-04-17 08:28:50.806834] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:17.552 [2024-04-17 08:28:50.806844] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:32:17.552 [2024-04-17 08:28:50.806950] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:32:17.552 [2024-04-17 08:28:50.806986] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1734270 0 00:32:17.552 [2024-04-17 08:28:50.813312] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:17.552 [2024-04-17 08:28:50.813331] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:17.552 [2024-04-17 08:28:50.813335] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:17.552 [2024-04-17 08:28:50.813338] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:17.552 [2024-04-17 08:28:50.813376] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.813381] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.813384] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.552 [2024-04-17 08:28:50.813397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:17.552 [2024-04-17 08:28:50.813417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.552 [2024-04-17 08:28:50.821332] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.552 [2024-04-17 08:28:50.821347] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.552 [2024-04-17 08:28:50.821350] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821353] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17736d0) on tqpair=0x1734270 00:32:17.552 [2024-04-17 08:28:50.821361] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:17.552 [2024-04-17 08:28:50.821367] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:32:17.552 [2024-04-17 08:28:50.821371] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:32:17.552 [2024-04-17 08:28:50.821386] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.552 [2024-04-17 
08:28:50.821392] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.552 [2024-04-17 08:28:50.821399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.552 [2024-04-17 08:28:50.821416] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.552 [2024-04-17 08:28:50.821471] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.552 [2024-04-17 08:28:50.821475] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.552 [2024-04-17 08:28:50.821478] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821482] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17736d0) on tqpair=0x1734270 00:32:17.552 [2024-04-17 08:28:50.821489] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:32:17.552 [2024-04-17 08:28:50.821494] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:32:17.552 [2024-04-17 08:28:50.821499] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821502] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.552 [2024-04-17 08:28:50.821510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.552 [2024-04-17 08:28:50.821522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.552 [2024-04-17 08:28:50.821564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.552 [2024-04-17 08:28:50.821569] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.552 [2024-04-17 08:28:50.821572] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821574] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17736d0) on tqpair=0x1734270 00:32:17.552 [2024-04-17 08:28:50.821579] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:32:17.552 [2024-04-17 08:28:50.821585] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:32:17.552 [2024-04-17 08:28:50.821589] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821594] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.552 [2024-04-17 08:28:50.821600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.552 [2024-04-17 08:28:50.821610] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.552 [2024-04-17 08:28:50.821653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.552 [2024-04-17 08:28:50.821658] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.552 [2024-04-17 08:28:50.821660] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821663] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17736d0) on tqpair=0x1734270 00:32:17.552 [2024-04-17 08:28:50.821667] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:17.552 [2024-04-17 08:28:50.821674] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821677] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.552 [2024-04-17 08:28:50.821685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.552 [2024-04-17 08:28:50.821696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.552 [2024-04-17 08:28:50.821739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.552 [2024-04-17 08:28:50.821744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.552 [2024-04-17 08:28:50.821747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17736d0) on tqpair=0x1734270 00:32:17.552 [2024-04-17 08:28:50.821753] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:32:17.552 [2024-04-17 08:28:50.821757] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:32:17.552 [2024-04-17 08:28:50.821762] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:17.552 [2024-04-17 08:28:50.821866] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:32:17.552 [2024-04-17 08:28:50.821871] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:17.552 [2024-04-17 08:28:50.821878] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821881] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821883] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.552 [2024-04-17 08:28:50.821889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.552 [2024-04-17 08:28:50.821900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.552 [2024-04-17 08:28:50.821940] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.552 [2024-04-17 08:28:50.821945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.552 [2024-04-17 08:28:50.821947] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:32:17.552 [2024-04-17 08:28:50.821950] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17736d0) on tqpair=0x1734270 00:32:17.552 [2024-04-17 08:28:50.821954] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:17.552 [2024-04-17 08:28:50.821961] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821964] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.821966] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.552 [2024-04-17 08:28:50.821972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.552 [2024-04-17 08:28:50.821982] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.552 [2024-04-17 08:28:50.822021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.552 [2024-04-17 08:28:50.822026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.552 [2024-04-17 08:28:50.822029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.822032] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17736d0) on tqpair=0x1734270 00:32:17.552 [2024-04-17 08:28:50.822036] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:17.552 [2024-04-17 08:28:50.822039] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:32:17.552 [2024-04-17 08:28:50.822045] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:32:17.552 [2024-04-17 08:28:50.822051] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:32:17.552 [2024-04-17 08:28:50.822058] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.822061] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.552 [2024-04-17 08:28:50.822064] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.552 [2024-04-17 08:28:50.822069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.553 [2024-04-17 08:28:50.822080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.553 [2024-04-17 08:28:50.822153] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.553 [2024-04-17 08:28:50.822158] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.553 [2024-04-17 08:28:50.822160] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822163] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1734270): datao=0, datal=4096, cccid=0 00:32:17.553 [2024-04-17 08:28:50.822167] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17736d0) on tqpair(0x1734270): expected_datao=0, 
payload_size=4096 00:32:17.553 [2024-04-17 08:28:50.822175] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822177] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.553 [2024-04-17 08:28:50.822189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.553 [2024-04-17 08:28:50.822191] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17736d0) on tqpair=0x1734270 00:32:17.553 [2024-04-17 08:28:50.822200] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:32:17.553 [2024-04-17 08:28:50.822206] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:32:17.553 [2024-04-17 08:28:50.822209] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:32:17.553 [2024-04-17 08:28:50.822213] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:32:17.553 [2024-04-17 08:28:50.822216] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:32:17.553 [2024-04-17 08:28:50.822219] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:32:17.553 [2024-04-17 08:28:50.822234] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:32:17.553 [2024-04-17 08:28:50.822239] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822242] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:17.553 [2024-04-17 08:28:50.822262] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.553 [2024-04-17 08:28:50.822323] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.553 [2024-04-17 08:28:50.822329] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.553 [2024-04-17 08:28:50.822331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822334] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17736d0) on tqpair=0x1734270 00:32:17.553 [2024-04-17 08:28:50.822341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822344] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822347] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.553 [2024-04-17 
08:28:50.822356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822359] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822362] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.553 [2024-04-17 08:28:50.822370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822373] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822375] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.553 [2024-04-17 08:28:50.822384] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822387] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822389] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.553 [2024-04-17 08:28:50.822397] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:32:17.553 [2024-04-17 08:28:50.822405] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:17.553 [2024-04-17 08:28:50.822410] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.553 [2024-04-17 08:28:50.822434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17736d0, cid 0, qid 0 00:32:17.553 [2024-04-17 08:28:50.822439] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773830, cid 1, qid 0 00:32:17.553 [2024-04-17 08:28:50.822442] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773990, cid 2, qid 0 00:32:17.553 [2024-04-17 08:28:50.822446] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.553 [2024-04-17 08:28:50.822449] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773c50, cid 4, qid 0 00:32:17.553 [2024-04-17 08:28:50.822537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.553 [2024-04-17 08:28:50.822547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.553 [2024-04-17 08:28:50.822550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822553] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1773c50) on tqpair=0x1734270 00:32:17.553 [2024-04-17 08:28:50.822557] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:32:17.553 [2024-04-17 08:28:50.822561] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:32:17.553 [2024-04-17 08:28:50.822569] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822572] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.553 [2024-04-17 08:28:50.822591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773c50, cid 4, qid 0 00:32:17.553 [2024-04-17 08:28:50.822645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.553 [2024-04-17 08:28:50.822650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.553 [2024-04-17 08:28:50.822652] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822655] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1734270): datao=0, datal=4096, cccid=4 00:32:17.553 [2024-04-17 08:28:50.822659] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1773c50) on tqpair(0x1734270): expected_datao=0, payload_size=4096 00:32:17.553 [2024-04-17 08:28:50.822665] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822667] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.553 [2024-04-17 08:28:50.822678] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.553 [2024-04-17 08:28:50.822681] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822683] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773c50) on tqpair=0x1734270 00:32:17.553 [2024-04-17 08:28:50.822693] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:32:17.553 [2024-04-17 08:28:50.822711] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822716] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.553 [2024-04-17 08:28:50.822726] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822737] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.553 [2024-04-17 08:28:50.822755] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773c50, cid 4, qid 0 00:32:17.553 [2024-04-17 08:28:50.822760] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773db0, cid 5, qid 0 00:32:17.553 [2024-04-17 08:28:50.822841] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.553 [2024-04-17 08:28:50.822848] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.553 [2024-04-17 08:28:50.822850] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822853] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1734270): datao=0, datal=1024, cccid=4 00:32:17.553 [2024-04-17 08:28:50.822856] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1773c50) on tqpair(0x1734270): expected_datao=0, payload_size=1024 00:32:17.553 [2024-04-17 08:28:50.822862] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822864] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822869] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.553 [2024-04-17 08:28:50.822873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.553 [2024-04-17 08:28:50.822876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773db0) on tqpair=0x1734270 00:32:17.553 [2024-04-17 08:28:50.822891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.553 [2024-04-17 08:28:50.822896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.553 [2024-04-17 08:28:50.822899] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822902] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773c50) on tqpair=0x1734270 00:32:17.553 [2024-04-17 08:28:50.822914] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822919] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.822925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.553 [2024-04-17 08:28:50.822939] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773c50, cid 4, qid 0 00:32:17.553 [2024-04-17 08:28:50.822988] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.553 [2024-04-17 08:28:50.822993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.553 [2024-04-17 08:28:50.822996] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.822998] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1734270): datao=0, datal=3072, cccid=4 00:32:17.553 [2024-04-17 08:28:50.823001] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1773c50) on tqpair(0x1734270): expected_datao=0, payload_size=3072 00:32:17.553 [2024-04-17 
08:28:50.823008] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.823010] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.823016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.553 [2024-04-17 08:28:50.823021] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.553 [2024-04-17 08:28:50.823023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.823026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773c50) on tqpair=0x1734270 00:32:17.553 [2024-04-17 08:28:50.823032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.823035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.823038] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1734270) 00:32:17.553 [2024-04-17 08:28:50.823043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.553 [2024-04-17 08:28:50.823056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773c50, cid 4, qid 0 00:32:17.553 [2024-04-17 08:28:50.823105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.553 [2024-04-17 08:28:50.823110] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.553 [2024-04-17 08:28:50.823112] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.823115] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1734270): datao=0, datal=8, cccid=4 00:32:17.553 [2024-04-17 08:28:50.823118] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1773c50) on tqpair(0x1734270): expected_datao=0, payload_size=8 00:32:17.553 [2024-04-17 08:28:50.823123] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.823126] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.823136] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.553 [2024-04-17 08:28:50.823141] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.553 [2024-04-17 08:28:50.823143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.553 [2024-04-17 08:28:50.823146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773c50) on tqpair=0x1734270 00:32:17.553 ===================================================== 00:32:17.553 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:17.553 ===================================================== 00:32:17.553 Controller Capabilities/Features 00:32:17.553 ================================ 00:32:17.553 Vendor ID: 0000 00:32:17.553 Subsystem Vendor ID: 0000 00:32:17.553 Serial Number: .................... 00:32:17.553 Model Number: ........................................ 
00:32:17.553 Firmware Version: 24.01.1 00:32:17.553 Recommended Arb Burst: 0 00:32:17.553 IEEE OUI Identifier: 00 00 00 00:32:17.553 Multi-path I/O 00:32:17.553 May have multiple subsystem ports: No 00:32:17.553 May have multiple controllers: No 00:32:17.554 Associated with SR-IOV VF: No 00:32:17.554 Max Data Transfer Size: 131072 00:32:17.554 Max Number of Namespaces: 0 00:32:17.554 Max Number of I/O Queues: 1024 00:32:17.554 NVMe Specification Version (VS): 1.3 00:32:17.554 NVMe Specification Version (Identify): 1.3 00:32:17.554 Maximum Queue Entries: 128 00:32:17.554 Contiguous Queues Required: Yes 00:32:17.554 Arbitration Mechanisms Supported 00:32:17.554 Weighted Round Robin: Not Supported 00:32:17.554 Vendor Specific: Not Supported 00:32:17.554 Reset Timeout: 15000 ms 00:32:17.554 Doorbell Stride: 4 bytes 00:32:17.554 NVM Subsystem Reset: Not Supported 00:32:17.554 Command Sets Supported 00:32:17.554 NVM Command Set: Supported 00:32:17.554 Boot Partition: Not Supported 00:32:17.554 Memory Page Size Minimum: 4096 bytes 00:32:17.554 Memory Page Size Maximum: 4096 bytes 00:32:17.554 Persistent Memory Region: Not Supported 00:32:17.554 Optional Asynchronous Events Supported 00:32:17.554 Namespace Attribute Notices: Not Supported 00:32:17.554 Firmware Activation Notices: Not Supported 00:32:17.554 ANA Change Notices: Not Supported 00:32:17.554 PLE Aggregate Log Change Notices: Not Supported 00:32:17.554 LBA Status Info Alert Notices: Not Supported 00:32:17.554 EGE Aggregate Log Change Notices: Not Supported 00:32:17.554 Normal NVM Subsystem Shutdown event: Not Supported 00:32:17.554 Zone Descriptor Change Notices: Not Supported 00:32:17.554 Discovery Log Change Notices: Supported 00:32:17.554 Controller Attributes 00:32:17.554 128-bit Host Identifier: Not Supported 00:32:17.554 Non-Operational Permissive Mode: Not Supported 00:32:17.554 NVM Sets: Not Supported 00:32:17.554 Read Recovery Levels: Not Supported 00:32:17.554 Endurance Groups: Not Supported 00:32:17.554 Predictable Latency Mode: Not Supported 00:32:17.554 Traffic Based Keep ALive: Not Supported 00:32:17.554 Namespace Granularity: Not Supported 00:32:17.554 SQ Associations: Not Supported 00:32:17.554 UUID List: Not Supported 00:32:17.554 Multi-Domain Subsystem: Not Supported 00:32:17.554 Fixed Capacity Management: Not Supported 00:32:17.554 Variable Capacity Management: Not Supported 00:32:17.554 Delete Endurance Group: Not Supported 00:32:17.554 Delete NVM Set: Not Supported 00:32:17.554 Extended LBA Formats Supported: Not Supported 00:32:17.554 Flexible Data Placement Supported: Not Supported 00:32:17.554 00:32:17.554 Controller Memory Buffer Support 00:32:17.554 ================================ 00:32:17.554 Supported: No 00:32:17.554 00:32:17.554 Persistent Memory Region Support 00:32:17.554 ================================ 00:32:17.554 Supported: No 00:32:17.554 00:32:17.554 Admin Command Set Attributes 00:32:17.554 ============================ 00:32:17.554 Security Send/Receive: Not Supported 00:32:17.554 Format NVM: Not Supported 00:32:17.554 Firmware Activate/Download: Not Supported 00:32:17.554 Namespace Management: Not Supported 00:32:17.554 Device Self-Test: Not Supported 00:32:17.554 Directives: Not Supported 00:32:17.554 NVMe-MI: Not Supported 00:32:17.554 Virtualization Management: Not Supported 00:32:17.554 Doorbell Buffer Config: Not Supported 00:32:17.554 Get LBA Status Capability: Not Supported 00:32:17.554 Command & Feature Lockdown Capability: Not Supported 00:32:17.554 Abort Command Limit: 1 00:32:17.554 
Async Event Request Limit: 4 00:32:17.554 Number of Firmware Slots: N/A 00:32:17.554 Firmware Slot 1 Read-Only: N/A 00:32:17.554 Firmware Activation Without Reset: N/A 00:32:17.554 Multiple Update Detection Support: N/A 00:32:17.554 Firmware Update Granularity: No Information Provided 00:32:17.554 Per-Namespace SMART Log: No 00:32:17.554 Asymmetric Namespace Access Log Page: Not Supported 00:32:17.554 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:17.554 Command Effects Log Page: Not Supported 00:32:17.554 Get Log Page Extended Data: Supported 00:32:17.554 Telemetry Log Pages: Not Supported 00:32:17.554 Persistent Event Log Pages: Not Supported 00:32:17.554 Supported Log Pages Log Page: May Support 00:32:17.554 Commands Supported & Effects Log Page: Not Supported 00:32:17.554 Feature Identifiers & Effects Log Page:May Support 00:32:17.554 NVMe-MI Commands & Effects Log Page: May Support 00:32:17.554 Data Area 4 for Telemetry Log: Not Supported 00:32:17.554 Error Log Page Entries Supported: 128 00:32:17.554 Keep Alive: Not Supported 00:32:17.554 00:32:17.554 NVM Command Set Attributes 00:32:17.554 ========================== 00:32:17.554 Submission Queue Entry Size 00:32:17.554 Max: 1 00:32:17.554 Min: 1 00:32:17.554 Completion Queue Entry Size 00:32:17.554 Max: 1 00:32:17.554 Min: 1 00:32:17.554 Number of Namespaces: 0 00:32:17.554 Compare Command: Not Supported 00:32:17.554 Write Uncorrectable Command: Not Supported 00:32:17.554 Dataset Management Command: Not Supported 00:32:17.554 Write Zeroes Command: Not Supported 00:32:17.554 Set Features Save Field: Not Supported 00:32:17.554 Reservations: Not Supported 00:32:17.554 Timestamp: Not Supported 00:32:17.554 Copy: Not Supported 00:32:17.554 Volatile Write Cache: Not Present 00:32:17.554 Atomic Write Unit (Normal): 1 00:32:17.554 Atomic Write Unit (PFail): 1 00:32:17.554 Atomic Compare & Write Unit: 1 00:32:17.554 Fused Compare & Write: Supported 00:32:17.554 Scatter-Gather List 00:32:17.554 SGL Command Set: Supported 00:32:17.554 SGL Keyed: Supported 00:32:17.554 SGL Bit Bucket Descriptor: Not Supported 00:32:17.554 SGL Metadata Pointer: Not Supported 00:32:17.554 Oversized SGL: Not Supported 00:32:17.554 SGL Metadata Address: Not Supported 00:32:17.554 SGL Offset: Supported 00:32:17.554 Transport SGL Data Block: Not Supported 00:32:17.554 Replay Protected Memory Block: Not Supported 00:32:17.554 00:32:17.554 Firmware Slot Information 00:32:17.554 ========================= 00:32:17.554 Active slot: 0 00:32:17.554 00:32:17.554 00:32:17.554 Error Log 00:32:17.554 ========= 00:32:17.554 00:32:17.554 Active Namespaces 00:32:17.554 ================= 00:32:17.554 Discovery Log Page 00:32:17.554 ================== 00:32:17.554 Generation Counter: 2 00:32:17.554 Number of Records: 2 00:32:17.554 Record Format: 0 00:32:17.554 00:32:17.554 Discovery Log Entry 0 00:32:17.554 ---------------------- 00:32:17.554 Transport Type: 3 (TCP) 00:32:17.554 Address Family: 1 (IPv4) 00:32:17.554 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:17.554 Entry Flags: 00:32:17.554 Duplicate Returned Information: 1 00:32:17.554 Explicit Persistent Connection Support for Discovery: 1 00:32:17.554 Transport Requirements: 00:32:17.554 Secure Channel: Not Required 00:32:17.554 Port ID: 0 (0x0000) 00:32:17.554 Controller ID: 65535 (0xffff) 00:32:17.554 Admin Max SQ Size: 128 00:32:17.554 Transport Service Identifier: 4420 00:32:17.554 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:17.554 Transport Address: 10.0.0.2 00:32:17.554 
Discovery Log Entry 1 00:32:17.554 ---------------------- 00:32:17.554 Transport Type: 3 (TCP) 00:32:17.554 Address Family: 1 (IPv4) 00:32:17.554 Subsystem Type: 2 (NVM Subsystem) 00:32:17.554 Entry Flags: 00:32:17.554 Duplicate Returned Information: 0 00:32:17.554 Explicit Persistent Connection Support for Discovery: 0 00:32:17.554 Transport Requirements: 00:32:17.554 Secure Channel: Not Required 00:32:17.554 Port ID: 0 (0x0000) 00:32:17.554 Controller ID: 65535 (0xffff) 00:32:17.554 Admin Max SQ Size: 128 00:32:17.554 Transport Service Identifier: 4420 00:32:17.554 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:32:17.554 Transport Address: 10.0.0.2 [2024-04-17 08:28:50.823224] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:32:17.554 [2024-04-17 08:28:50.823234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.554 [2024-04-17 08:28:50.823240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.554 [2024-04-17 08:28:50.823245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.554 [2024-04-17 08:28:50.823249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.554 [2024-04-17 08:28:50.823258] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823263] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.554 [2024-04-17 08:28:50.823269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.554 [2024-04-17 08:28:50.823281] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.554 [2024-04-17 08:28:50.823337] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.554 [2024-04-17 08:28:50.823343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.554 [2024-04-17 08:28:50.823345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823348] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.554 [2024-04-17 08:28:50.823354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823357] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.554 [2024-04-17 08:28:50.823365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.554 [2024-04-17 08:28:50.823378] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.554 [2024-04-17 08:28:50.823435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.554 [2024-04-17 08:28:50.823442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.554 [2024-04-17 08:28:50.823445] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823447] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.554 [2024-04-17 08:28:50.823452] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:32:17.554 [2024-04-17 08:28:50.823455] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:32:17.554 [2024-04-17 08:28:50.823462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823465] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.554 [2024-04-17 08:28:50.823473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.554 [2024-04-17 08:28:50.823484] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.554 [2024-04-17 08:28:50.823523] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.554 [2024-04-17 08:28:50.823528] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.554 [2024-04-17 08:28:50.823547] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823550] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.554 [2024-04-17 08:28:50.823559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823562] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823565] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.554 [2024-04-17 08:28:50.823571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.554 [2024-04-17 08:28:50.823582] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.554 [2024-04-17 08:28:50.823621] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.554 [2024-04-17 08:28:50.823626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.554 [2024-04-17 08:28:50.823629] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.554 [2024-04-17 08:28:50.823640] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823643] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823646] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.554 [2024-04-17 08:28:50.823651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.554 [2024-04-17 08:28:50.823663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.554 [2024-04-17 08:28:50.823706] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.554 [2024-04-17 
08:28:50.823711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.554 [2024-04-17 08:28:50.823714] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.554 [2024-04-17 08:28:50.823717] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.823725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823731] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.823737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.823749] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.823788] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.823793] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.823795] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.823806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823812] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.823818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.823829] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.823870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.823876] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.823878] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823881] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.823889] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823895] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.823901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.823912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.823956] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.823962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.823964] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:32:17.555 [2024-04-17 08:28:50.823967] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.823975] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.823981] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.823987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.823998] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824046] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824052] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824054] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824057] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824068] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824071] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824089] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824133] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824141] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824144] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824152] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824155] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824158] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824175] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824219] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824224] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824238] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824241] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824261] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824313] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824316] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824334] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824338] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824341] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824358] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824400] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824405] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824408] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824411] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824419] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824422] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824425] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824490] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824497] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824500] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824509] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 
08:28:50.824515] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824532] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824575] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824580] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824583] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824586] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824594] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824617] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824665] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824673] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824676] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824684] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824692] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824695] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824713] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824754] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824762] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824764] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824772] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824776] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824784] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824795] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824846] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824853] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824856] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824864] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824867] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824870] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824887] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.555 [2024-04-17 08:28:50.824931] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.555 [2024-04-17 08:28:50.824936] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.555 [2024-04-17 08:28:50.824939] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824942] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.555 [2024-04-17 08:28:50.824950] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.555 [2024-04-17 08:28:50.824955] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.555 [2024-04-17 08:28:50.824961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.555 [2024-04-17 08:28:50.824972] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.556 [2024-04-17 08:28:50.825016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.556 [2024-04-17 08:28:50.825022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.556 [2024-04-17 08:28:50.825024] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825027] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.556 [2024-04-17 08:28:50.825035] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.556 [2024-04-17 08:28:50.825046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.556 [2024-04-17 08:28:50.825058] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.556 [2024-04-17 08:28:50.825103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.556 [2024-04-17 08:28:50.825109] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.556 [2024-04-17 08:28:50.825111] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825114] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.556 [2024-04-17 08:28:50.825122] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825128] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.556 [2024-04-17 08:28:50.825134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.556 [2024-04-17 08:28:50.825145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.556 [2024-04-17 08:28:50.825191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.556 [2024-04-17 08:28:50.825196] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.556 [2024-04-17 08:28:50.825198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825201] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.556 [2024-04-17 08:28:50.825209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825215] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.556 [2024-04-17 08:28:50.825221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.556 [2024-04-17 08:28:50.825232] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.556 [2024-04-17 08:28:50.825280] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.556 [2024-04-17 08:28:50.825286] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.556 [2024-04-17 08:28:50.825288] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.825291] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.556 [2024-04-17 08:28:50.825300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.829321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.829325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1734270) 00:32:17.556 [2024-04-17 08:28:50.829332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.556 [2024-04-17 08:28:50.829363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1773af0, cid 3, qid 0 00:32:17.556 [2024-04-17 08:28:50.829413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
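Editor's note: the long run of "FABRIC PROPERTY GET qid:0 cid:3" entries above is the host repeatedly reading a controller property while the discovery session is torn down; the segment that follows records the shutdown completing and the identify utility being launched again, this time against the NVM subsystem nqn.2016-06.io.spdk:cnode1. As a rough illustration only, the two queries whose output appears in this log could be reproduced by hand roughly as sketched below. The cnode1 invocation mirrors the command line recorded later in this log; the discovery-subsystem variant is an assumption inferred from the subsystem NQN printed in the output above, not a command captured here.

    #!/usr/bin/env bash
    # Illustrative sketch only -- not part of the captured test run.
    set -euo pipefail

    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

    # Assumed: query the discovery subsystem at the TCP listener shown in this log
    # (produces the discovery log page and controller data dumped above).
    "$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

    # Matches the invocation recorded below: query the NVM subsystem advertised
    # in Discovery Log Entry 1, with all debug log flags enabled.
    "$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
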
00:32:17.556 [2024-04-17 08:28:50.829418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.556 [2024-04-17 08:28:50.829421] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.556 [2024-04-17 08:28:50.829424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1773af0) on tqpair=0x1734270 00:32:17.556 [2024-04-17 08:28:50.829431] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:32:17.556 00:32:17.556 08:28:50 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:32:17.556 [2024-04-17 08:28:50.871270] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:17.556 [2024-04-17 08:28:50.871332] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68552 ] 00:32:17.818 [2024-04-17 08:28:50.999583] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:32:17.818 [2024-04-17 08:28:50.999643] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:17.818 [2024-04-17 08:28:50.999647] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:17.818 [2024-04-17 08:28:50.999658] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:17.818 [2024-04-17 08:28:50.999666] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:32:17.818 [2024-04-17 08:28:50.999772] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:32:17.818 [2024-04-17 08:28:50.999806] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c9f270 0 00:32:17.818 [2024-04-17 08:28:51.005314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:17.818 [2024-04-17 08:28:51.005330] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:17.818 [2024-04-17 08:28:51.005334] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:17.818 [2024-04-17 08:28:51.005337] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:17.818 [2024-04-17 08:28:51.005374] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.818 [2024-04-17 08:28:51.005379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.818 [2024-04-17 08:28:51.005382] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.818 [2024-04-17 08:28:51.005392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:17.818 [2024-04-17 08:28:51.005411] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, cid 0, qid 0 00:32:17.818 [2024-04-17 08:28:51.013331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.818 [2024-04-17 08:28:51.013346] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.818 [2024-04-17 08:28:51.013349] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:32:17.818 [2024-04-17 08:28:51.013353] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cde6d0) on tqpair=0x1c9f270 00:32:17.818 [2024-04-17 08:28:51.013363] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:17.818 [2024-04-17 08:28:51.013368] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:32:17.818 [2024-04-17 08:28:51.013373] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:32:17.818 [2024-04-17 08:28:51.013387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.818 [2024-04-17 08:28:51.013390] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.818 [2024-04-17 08:28:51.013392] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.818 [2024-04-17 08:28:51.013399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.818 [2024-04-17 08:28:51.013415] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, cid 0, qid 0 00:32:17.818 [2024-04-17 08:28:51.013462] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.818 [2024-04-17 08:28:51.013467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.818 [2024-04-17 08:28:51.013469] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.818 [2024-04-17 08:28:51.013472] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cde6d0) on tqpair=0x1c9f270 00:32:17.818 [2024-04-17 08:28:51.013480] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:32:17.818 [2024-04-17 08:28:51.013486] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:32:17.818 [2024-04-17 08:28:51.013492] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.818 [2024-04-17 08:28:51.013494] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.818 [2024-04-17 08:28:51.013497] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.818 [2024-04-17 08:28:51.013502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.819 [2024-04-17 08:28:51.013514] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, cid 0, qid 0 00:32:17.819 [2024-04-17 08:28:51.013552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.819 [2024-04-17 08:28:51.013557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.819 [2024-04-17 08:28:51.013560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013563] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cde6d0) on tqpair=0x1c9f270 00:32:17.819 [2024-04-17 08:28:51.013568] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:32:17.819 [2024-04-17 08:28:51.013573] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:32:17.819 [2024-04-17 08:28:51.013578] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013583] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.819 [2024-04-17 08:28:51.013588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.819 [2024-04-17 08:28:51.013599] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, cid 0, qid 0 00:32:17.819 [2024-04-17 08:28:51.013645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.819 [2024-04-17 08:28:51.013654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.819 [2024-04-17 08:28:51.013657] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013660] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cde6d0) on tqpair=0x1c9f270 00:32:17.819 [2024-04-17 08:28:51.013664] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:17.819 [2024-04-17 08:28:51.013671] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013674] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013677] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.819 [2024-04-17 08:28:51.013682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.819 [2024-04-17 08:28:51.013693] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, cid 0, qid 0 00:32:17.819 [2024-04-17 08:28:51.013760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.819 [2024-04-17 08:28:51.013777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.819 [2024-04-17 08:28:51.013779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cde6d0) on tqpair=0x1c9f270 00:32:17.819 [2024-04-17 08:28:51.013786] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:32:17.819 [2024-04-17 08:28:51.013790] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:32:17.819 [2024-04-17 08:28:51.013795] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:17.819 [2024-04-17 08:28:51.013899] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:32:17.819 [2024-04-17 08:28:51.013907] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:17.819 [2024-04-17 08:28:51.013915] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013920] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.819 [2024-04-17 08:28:51.013925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.819 [2024-04-17 08:28:51.013937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, cid 0, qid 0 00:32:17.819 [2024-04-17 08:28:51.013986] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.819 [2024-04-17 08:28:51.013991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.819 [2024-04-17 08:28:51.013993] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.013996] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cde6d0) on tqpair=0x1c9f270 00:32:17.819 [2024-04-17 08:28:51.014000] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:17.819 [2024-04-17 08:28:51.014007] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014012] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.819 [2024-04-17 08:28:51.014018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.819 [2024-04-17 08:28:51.014028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, cid 0, qid 0 00:32:17.819 [2024-04-17 08:28:51.014076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.819 [2024-04-17 08:28:51.014081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.819 [2024-04-17 08:28:51.014083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cde6d0) on tqpair=0x1c9f270 00:32:17.819 [2024-04-17 08:28:51.014090] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:17.819 [2024-04-17 08:28:51.014093] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:32:17.819 [2024-04-17 08:28:51.014099] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:32:17.819 [2024-04-17 08:28:51.014106] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:32:17.819 [2024-04-17 08:28:51.014113] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014116] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014119] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.819 [2024-04-17 08:28:51.014124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.819 [2024-04-17 08:28:51.014136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, 
cid 0, qid 0 00:32:17.819 [2024-04-17 08:28:51.014239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.819 [2024-04-17 08:28:51.014247] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.819 [2024-04-17 08:28:51.014250] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014253] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9f270): datao=0, datal=4096, cccid=0 00:32:17.819 [2024-04-17 08:28:51.014256] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cde6d0) on tqpair(0x1c9f270): expected_datao=0, payload_size=4096 00:32:17.819 [2024-04-17 08:28:51.014263] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014266] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.819 [2024-04-17 08:28:51.014277] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.819 [2024-04-17 08:28:51.014280] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014282] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cde6d0) on tqpair=0x1c9f270 00:32:17.819 [2024-04-17 08:28:51.014289] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:32:17.819 [2024-04-17 08:28:51.014295] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:32:17.819 [2024-04-17 08:28:51.014298] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:32:17.819 [2024-04-17 08:28:51.014301] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:32:17.819 [2024-04-17 08:28:51.014314] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:32:17.819 [2024-04-17 08:28:51.014317] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:32:17.819 [2024-04-17 08:28:51.014323] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:32:17.819 [2024-04-17 08:28:51.014328] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014331] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014334] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.819 [2024-04-17 08:28:51.014340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:17.819 [2024-04-17 08:28:51.014353] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, cid 0, qid 0 00:32:17.819 [2024-04-17 08:28:51.014400] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.819 [2024-04-17 08:28:51.014405] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.819 [2024-04-17 08:28:51.014407] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014410] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1cde6d0) on tqpair=0x1c9f270 00:32:17.819 [2024-04-17 08:28:51.014417] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014420] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014422] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9f270) 00:32:17.819 [2024-04-17 08:28:51.014427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.819 [2024-04-17 08:28:51.014431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014434] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014436] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c9f270) 00:32:17.819 [2024-04-17 08:28:51.014441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.819 [2024-04-17 08:28:51.014445] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014448] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c9f270) 00:32:17.819 [2024-04-17 08:28:51.014455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.819 [2024-04-17 08:28:51.014459] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014462] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.819 [2024-04-17 08:28:51.014464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.820 [2024-04-17 08:28:51.014468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.820 [2024-04-17 08:28:51.014472] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.014481] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.014486] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014491] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9f270) 00:32:17.820 [2024-04-17 08:28:51.014496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.820 [2024-04-17 08:28:51.014509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde6d0, cid 0, qid 0 00:32:17.820 [2024-04-17 08:28:51.014513] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde830, cid 1, qid 0 00:32:17.820 [2024-04-17 08:28:51.014517] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cde990, cid 2, qid 0 00:32:17.820 [2024-04-17 08:28:51.014521] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1cdeaf0, cid 3, qid 0 00:32:17.820 [2024-04-17 08:28:51.014524] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdec50, cid 4, qid 0 00:32:17.820 [2024-04-17 08:28:51.014623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.820 [2024-04-17 08:28:51.014631] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.820 [2024-04-17 08:28:51.014634] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014637] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdec50) on tqpair=0x1c9f270 00:32:17.820 [2024-04-17 08:28:51.014641] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:32:17.820 [2024-04-17 08:28:51.014645] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.014651] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.014656] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.014660] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014663] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014666] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9f270) 00:32:17.820 [2024-04-17 08:28:51.014671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:17.820 [2024-04-17 08:28:51.014682] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdec50, cid 4, qid 0 00:32:17.820 [2024-04-17 08:28:51.014737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.820 [2024-04-17 08:28:51.014742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.820 [2024-04-17 08:28:51.014744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdec50) on tqpair=0x1c9f270 00:32:17.820 [2024-04-17 08:28:51.014791] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.014801] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.014806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014812] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9f270) 00:32:17.820 [2024-04-17 08:28:51.014817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.820 [2024-04-17 08:28:51.014829] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdec50, cid 4, qid 0 00:32:17.820 
[2024-04-17 08:28:51.014889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.820 [2024-04-17 08:28:51.014895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.820 [2024-04-17 08:28:51.014898] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014900] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9f270): datao=0, datal=4096, cccid=4 00:32:17.820 [2024-04-17 08:28:51.014903] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cdec50) on tqpair(0x1c9f270): expected_datao=0, payload_size=4096 00:32:17.820 [2024-04-17 08:28:51.014909] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014912] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.820 [2024-04-17 08:28:51.014928] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.820 [2024-04-17 08:28:51.014931] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdec50) on tqpair=0x1c9f270 00:32:17.820 [2024-04-17 08:28:51.014947] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:32:17.820 [2024-04-17 08:28:51.014957] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.014964] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.014970] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014973] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.014975] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9f270) 00:32:17.820 [2024-04-17 08:28:51.014980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.820 [2024-04-17 08:28:51.014992] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdec50, cid 4, qid 0 00:32:17.820 [2024-04-17 08:28:51.015058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.820 [2024-04-17 08:28:51.015063] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.820 [2024-04-17 08:28:51.015065] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015068] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9f270): datao=0, datal=4096, cccid=4 00:32:17.820 [2024-04-17 08:28:51.015071] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cdec50) on tqpair(0x1c9f270): expected_datao=0, payload_size=4096 00:32:17.820 [2024-04-17 08:28:51.015077] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015080] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015086] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.820 [2024-04-17 08:28:51.015090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:32:17.820 [2024-04-17 08:28:51.015093] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015095] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdec50) on tqpair=0x1c9f270 00:32:17.820 [2024-04-17 08:28:51.015107] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.015113] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.015119] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9f270) 00:32:17.820 [2024-04-17 08:28:51.015129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.820 [2024-04-17 08:28:51.015141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdec50, cid 4, qid 0 00:32:17.820 [2024-04-17 08:28:51.015208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.820 [2024-04-17 08:28:51.015217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.820 [2024-04-17 08:28:51.015220] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015223] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9f270): datao=0, datal=4096, cccid=4 00:32:17.820 [2024-04-17 08:28:51.015226] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cdec50) on tqpair(0x1c9f270): expected_datao=0, payload_size=4096 00:32:17.820 [2024-04-17 08:28:51.015231] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015234] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015240] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.820 [2024-04-17 08:28:51.015244] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.820 [2024-04-17 08:28:51.015247] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015250] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdec50) on tqpair=0x1c9f270 00:32:17.820 [2024-04-17 08:28:51.015256] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.015262] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.015269] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.015273] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.015277] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID 
(timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.015280] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:32:17.820 [2024-04-17 08:28:51.015284] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:32:17.820 [2024-04-17 08:28:51.015288] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:32:17.820 [2024-04-17 08:28:51.015299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015302] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.820 [2024-04-17 08:28:51.015313] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9f270) 00:32:17.820 [2024-04-17 08:28:51.015319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.820 [2024-04-17 08:28:51.015324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015327] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015329] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9f270) 00:32:17.821 [2024-04-17 08:28:51.015334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.821 [2024-04-17 08:28:51.015350] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdec50, cid 4, qid 0 00:32:17.821 [2024-04-17 08:28:51.015355] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdedb0, cid 5, qid 0 00:32:17.821 [2024-04-17 08:28:51.015417] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.821 [2024-04-17 08:28:51.015422] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.821 [2024-04-17 08:28:51.015424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015427] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdec50) on tqpair=0x1c9f270 00:32:17.821 [2024-04-17 08:28:51.015433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.821 [2024-04-17 08:28:51.015437] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.821 [2024-04-17 08:28:51.015440] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015442] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdedb0) on tqpair=0x1c9f270 00:32:17.821 [2024-04-17 08:28:51.015449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015452] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015455] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9f270) 00:32:17.821 [2024-04-17 08:28:51.015460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.821 [2024-04-17 08:28:51.015471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdedb0, cid 5, qid 0 00:32:17.821 [2024-04-17 08:28:51.015529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:32:17.821 [2024-04-17 08:28:51.015534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.821 [2024-04-17 08:28:51.015536] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015539] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdedb0) on tqpair=0x1c9f270 00:32:17.821 [2024-04-17 08:28:51.015546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015549] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015552] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9f270) 00:32:17.821 [2024-04-17 08:28:51.015557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.821 [2024-04-17 08:28:51.015567] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdedb0, cid 5, qid 0 00:32:17.821 [2024-04-17 08:28:51.015616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.821 [2024-04-17 08:28:51.015620] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.821 [2024-04-17 08:28:51.015623] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015625] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdedb0) on tqpair=0x1c9f270 00:32:17.821 [2024-04-17 08:28:51.015632] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015636] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015638] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9f270) 00:32:17.821 [2024-04-17 08:28:51.015643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.821 [2024-04-17 08:28:51.015654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdedb0, cid 5, qid 0 00:32:17.821 [2024-04-17 08:28:51.015700] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.821 [2024-04-17 08:28:51.015704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.821 [2024-04-17 08:28:51.015707] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdedb0) on tqpair=0x1c9f270 00:32:17.821 [2024-04-17 08:28:51.015719] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015722] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015724] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9f270) 00:32:17.821 [2024-04-17 08:28:51.015730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.821 [2024-04-17 08:28:51.015735] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015738] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1c9f270) 00:32:17.821 [2024-04-17 08:28:51.015745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.821 [2024-04-17 08:28:51.015750] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015753] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015756] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c9f270) 00:32:17.821 [2024-04-17 08:28:51.015760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.821 [2024-04-17 08:28:51.015766] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015769] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c9f270) 00:32:17.821 [2024-04-17 08:28:51.015776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.821 [2024-04-17 08:28:51.015788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdedb0, cid 5, qid 0 00:32:17.821 [2024-04-17 08:28:51.015792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdec50, cid 4, qid 0 00:32:17.821 [2024-04-17 08:28:51.015796] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdef10, cid 6, qid 0 00:32:17.821 [2024-04-17 08:28:51.015799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdf070, cid 7, qid 0 00:32:17.821 [2024-04-17 08:28:51.015946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.821 [2024-04-17 08:28:51.015958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.821 [2024-04-17 08:28:51.015961] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015964] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9f270): datao=0, datal=8192, cccid=5 00:32:17.821 [2024-04-17 08:28:51.015967] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cdedb0) on tqpair(0x1c9f270): expected_datao=0, payload_size=8192 00:32:17.821 [2024-04-17 08:28:51.015979] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015983] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.821 [2024-04-17 08:28:51.015992] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.821 [2024-04-17 08:28:51.015994] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.015997] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9f270): datao=0, datal=512, cccid=4 00:32:17.821 [2024-04-17 08:28:51.016000] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cdec50) on tqpair(0x1c9f270): expected_datao=0, payload_size=512 00:32:17.821 [2024-04-17 08:28:51.016005] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016008] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016012] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.821 [2024-04-17 08:28:51.016017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.821 [2024-04-17 08:28:51.016019] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016021] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9f270): datao=0, datal=512, cccid=6 00:32:17.821 [2024-04-17 08:28:51.016024] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cdef10) on tqpair(0x1c9f270): expected_datao=0, payload_size=512 00:32:17.821 [2024-04-17 08:28:51.016030] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016032] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016036] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:17.821 [2024-04-17 08:28:51.016041] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:17.821 [2024-04-17 08:28:51.016043] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016046] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9f270): datao=0, datal=4096, cccid=7 00:32:17.821 [2024-04-17 08:28:51.016049] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cdf070) on tqpair(0x1c9f270): expected_datao=0, payload_size=4096 00:32:17.821 [2024-04-17 08:28:51.016055] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016058] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016062] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.821 [2024-04-17 08:28:51.016067] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.821 [2024-04-17 08:28:51.016069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016072] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdedb0) on tqpair=0x1c9f270 00:32:17.821 [2024-04-17 08:28:51.016084] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.821 [2024-04-17 08:28:51.016089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.821 [2024-04-17 08:28:51.016091] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016094] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdec50) on tqpair=0x1c9f270 00:32:17.821 [2024-04-17 08:28:51.016102] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.821 [2024-04-17 08:28:51.016107] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.821 [2024-04-17 08:28:51.016110] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016112] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdef10) on tqpair=0x1c9f270 00:32:17.821 [2024-04-17 08:28:51.016118] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.821 [2024-04-17 08:28:51.016123] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.821 [2024-04-17 08:28:51.016125] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.821 [2024-04-17 08:28:51.016128] 
nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdf070) on tqpair=0x1c9f270 00:32:17.821 ===================================================== 00:32:17.821 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:17.821 ===================================================== 00:32:17.821 Controller Capabilities/Features 00:32:17.821 ================================ 00:32:17.822 Vendor ID: 8086 00:32:17.822 Subsystem Vendor ID: 8086 00:32:17.822 Serial Number: SPDK00000000000001 00:32:17.822 Model Number: SPDK bdev Controller 00:32:17.822 Firmware Version: 24.01.1 00:32:17.822 Recommended Arb Burst: 6 00:32:17.822 IEEE OUI Identifier: e4 d2 5c 00:32:17.822 Multi-path I/O 00:32:17.822 May have multiple subsystem ports: Yes 00:32:17.822 May have multiple controllers: Yes 00:32:17.822 Associated with SR-IOV VF: No 00:32:17.822 Max Data Transfer Size: 131072 00:32:17.822 Max Number of Namespaces: 32 00:32:17.822 Max Number of I/O Queues: 127 00:32:17.822 NVMe Specification Version (VS): 1.3 00:32:17.822 NVMe Specification Version (Identify): 1.3 00:32:17.822 Maximum Queue Entries: 128 00:32:17.822 Contiguous Queues Required: Yes 00:32:17.822 Arbitration Mechanisms Supported 00:32:17.822 Weighted Round Robin: Not Supported 00:32:17.822 Vendor Specific: Not Supported 00:32:17.822 Reset Timeout: 15000 ms 00:32:17.822 Doorbell Stride: 4 bytes 00:32:17.822 NVM Subsystem Reset: Not Supported 00:32:17.822 Command Sets Supported 00:32:17.822 NVM Command Set: Supported 00:32:17.822 Boot Partition: Not Supported 00:32:17.822 Memory Page Size Minimum: 4096 bytes 00:32:17.822 Memory Page Size Maximum: 4096 bytes 00:32:17.822 Persistent Memory Region: Not Supported 00:32:17.822 Optional Asynchronous Events Supported 00:32:17.822 Namespace Attribute Notices: Supported 00:32:17.822 Firmware Activation Notices: Not Supported 00:32:17.822 ANA Change Notices: Not Supported 00:32:17.822 PLE Aggregate Log Change Notices: Not Supported 00:32:17.822 LBA Status Info Alert Notices: Not Supported 00:32:17.822 EGE Aggregate Log Change Notices: Not Supported 00:32:17.822 Normal NVM Subsystem Shutdown event: Not Supported 00:32:17.822 Zone Descriptor Change Notices: Not Supported 00:32:17.822 Discovery Log Change Notices: Not Supported 00:32:17.822 Controller Attributes 00:32:17.822 128-bit Host Identifier: Supported 00:32:17.822 Non-Operational Permissive Mode: Not Supported 00:32:17.822 NVM Sets: Not Supported 00:32:17.822 Read Recovery Levels: Not Supported 00:32:17.822 Endurance Groups: Not Supported 00:32:17.822 Predictable Latency Mode: Not Supported 00:32:17.822 Traffic Based Keep ALive: Not Supported 00:32:17.822 Namespace Granularity: Not Supported 00:32:17.822 SQ Associations: Not Supported 00:32:17.822 UUID List: Not Supported 00:32:17.822 Multi-Domain Subsystem: Not Supported 00:32:17.822 Fixed Capacity Management: Not Supported 00:32:17.822 Variable Capacity Management: Not Supported 00:32:17.822 Delete Endurance Group: Not Supported 00:32:17.822 Delete NVM Set: Not Supported 00:32:17.822 Extended LBA Formats Supported: Not Supported 00:32:17.822 Flexible Data Placement Supported: Not Supported 00:32:17.822 00:32:17.822 Controller Memory Buffer Support 00:32:17.822 ================================ 00:32:17.822 Supported: No 00:32:17.822 00:32:17.822 Persistent Memory Region Support 00:32:17.822 ================================ 00:32:17.822 Supported: No 00:32:17.822 00:32:17.822 Admin Command Set Attributes 00:32:17.822 ============================ 00:32:17.822 
Security Send/Receive: Not Supported 00:32:17.822 Format NVM: Not Supported 00:32:17.822 Firmware Activate/Download: Not Supported 00:32:17.822 Namespace Management: Not Supported 00:32:17.822 Device Self-Test: Not Supported 00:32:17.822 Directives: Not Supported 00:32:17.822 NVMe-MI: Not Supported 00:32:17.822 Virtualization Management: Not Supported 00:32:17.822 Doorbell Buffer Config: Not Supported 00:32:17.822 Get LBA Status Capability: Not Supported 00:32:17.822 Command & Feature Lockdown Capability: Not Supported 00:32:17.822 Abort Command Limit: 4 00:32:17.822 Async Event Request Limit: 4 00:32:17.822 Number of Firmware Slots: N/A 00:32:17.822 Firmware Slot 1 Read-Only: N/A 00:32:17.822 Firmware Activation Without Reset: N/A 00:32:17.822 Multiple Update Detection Support: N/A 00:32:17.822 Firmware Update Granularity: No Information Provided 00:32:17.822 Per-Namespace SMART Log: No 00:32:17.822 Asymmetric Namespace Access Log Page: Not Supported 00:32:17.822 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:32:17.822 Command Effects Log Page: Supported 00:32:17.822 Get Log Page Extended Data: Supported 00:32:17.822 Telemetry Log Pages: Not Supported 00:32:17.822 Persistent Event Log Pages: Not Supported 00:32:17.822 Supported Log Pages Log Page: May Support 00:32:17.822 Commands Supported & Effects Log Page: Not Supported 00:32:17.822 Feature Identifiers & Effects Log Page:May Support 00:32:17.822 NVMe-MI Commands & Effects Log Page: May Support 00:32:17.822 Data Area 4 for Telemetry Log: Not Supported 00:32:17.822 Error Log Page Entries Supported: 128 00:32:17.822 Keep Alive: Supported 00:32:17.822 Keep Alive Granularity: 10000 ms 00:32:17.822 00:32:17.822 NVM Command Set Attributes 00:32:17.822 ========================== 00:32:17.822 Submission Queue Entry Size 00:32:17.822 Max: 64 00:32:17.822 Min: 64 00:32:17.822 Completion Queue Entry Size 00:32:17.822 Max: 16 00:32:17.822 Min: 16 00:32:17.822 Number of Namespaces: 32 00:32:17.822 Compare Command: Supported 00:32:17.822 Write Uncorrectable Command: Not Supported 00:32:17.822 Dataset Management Command: Supported 00:32:17.822 Write Zeroes Command: Supported 00:32:17.822 Set Features Save Field: Not Supported 00:32:17.822 Reservations: Supported 00:32:17.822 Timestamp: Not Supported 00:32:17.822 Copy: Supported 00:32:17.822 Volatile Write Cache: Present 00:32:17.822 Atomic Write Unit (Normal): 1 00:32:17.822 Atomic Write Unit (PFail): 1 00:32:17.822 Atomic Compare & Write Unit: 1 00:32:17.822 Fused Compare & Write: Supported 00:32:17.822 Scatter-Gather List 00:32:17.822 SGL Command Set: Supported 00:32:17.822 SGL Keyed: Supported 00:32:17.822 SGL Bit Bucket Descriptor: Not Supported 00:32:17.822 SGL Metadata Pointer: Not Supported 00:32:17.822 Oversized SGL: Not Supported 00:32:17.822 SGL Metadata Address: Not Supported 00:32:17.822 SGL Offset: Supported 00:32:17.822 Transport SGL Data Block: Not Supported 00:32:17.822 Replay Protected Memory Block: Not Supported 00:32:17.822 00:32:17.822 Firmware Slot Information 00:32:17.822 ========================= 00:32:17.822 Active slot: 1 00:32:17.822 Slot 1 Firmware Revision: 24.01.1 00:32:17.822 00:32:17.822 00:32:17.822 Commands Supported and Effects 00:32:17.822 ============================== 00:32:17.822 Admin Commands 00:32:17.822 -------------- 00:32:17.822 Get Log Page (02h): Supported 00:32:17.822 Identify (06h): Supported 00:32:17.822 Abort (08h): Supported 00:32:17.822 Set Features (09h): Supported 00:32:17.822 Get Features (0Ah): Supported 00:32:17.822 Asynchronous Event Request 
(0Ch): Supported 00:32:17.822 Keep Alive (18h): Supported 00:32:17.822 I/O Commands 00:32:17.822 ------------ 00:32:17.822 Flush (00h): Supported LBA-Change 00:32:17.822 Write (01h): Supported LBA-Change 00:32:17.822 Read (02h): Supported 00:32:17.822 Compare (05h): Supported 00:32:17.822 Write Zeroes (08h): Supported LBA-Change 00:32:17.822 Dataset Management (09h): Supported LBA-Change 00:32:17.822 Copy (19h): Supported LBA-Change 00:32:17.822 Unknown (79h): Supported LBA-Change 00:32:17.822 Unknown (7Ah): Supported 00:32:17.822 00:32:17.822 Error Log 00:32:17.822 ========= 00:32:17.822 00:32:17.822 Arbitration 00:32:17.822 =========== 00:32:17.822 Arbitration Burst: 1 00:32:17.822 00:32:17.822 Power Management 00:32:17.822 ================ 00:32:17.822 Number of Power States: 1 00:32:17.822 Current Power State: Power State #0 00:32:17.822 Power State #0: 00:32:17.822 Max Power: 0.00 W 00:32:17.822 Non-Operational State: Operational 00:32:17.822 Entry Latency: Not Reported 00:32:17.822 Exit Latency: Not Reported 00:32:17.822 Relative Read Throughput: 0 00:32:17.822 Relative Read Latency: 0 00:32:17.822 Relative Write Throughput: 0 00:32:17.822 Relative Write Latency: 0 00:32:17.822 Idle Power: Not Reported 00:32:17.822 Active Power: Not Reported 00:32:17.822 Non-Operational Permissive Mode: Not Supported 00:32:17.822 00:32:17.822 Health Information 00:32:17.822 ================== 00:32:17.822 Critical Warnings: 00:32:17.822 Available Spare Space: OK 00:32:17.822 Temperature: OK 00:32:17.822 Device Reliability: OK 00:32:17.822 Read Only: No 00:32:17.822 Volatile Memory Backup: OK 00:32:17.822 Current Temperature: 0 Kelvin (-273 Celsius) 00:32:17.822 Temperature Threshold: [2024-04-17 08:28:51.016222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.822 [2024-04-17 08:28:51.016226] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.822 [2024-04-17 08:28:51.016229] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c9f270) 00:32:17.823 [2024-04-17 08:28:51.016235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.823 [2024-04-17 08:28:51.016249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdf070, cid 7, qid 0 00:32:17.823 [2024-04-17 08:28:51.016297] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.823 [2024-04-17 08:28:51.016302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.823 [2024-04-17 08:28:51.016313] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016316] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdf070) on tqpair=0x1c9f270 00:32:17.823 [2024-04-17 08:28:51.016345] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:32:17.823 [2024-04-17 08:28:51.016355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.823 [2024-04-17 08:28:51.016361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.823 [2024-04-17 08:28:51.016365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.823 [2024-04-17 08:28:51.016370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.823 [2024-04-17 08:28:51.016376] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016382] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.823 [2024-04-17 08:28:51.016388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.823 [2024-04-17 08:28:51.016402] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.823 [2024-04-17 08:28:51.016446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.823 [2024-04-17 08:28:51.016450] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.823 [2024-04-17 08:28:51.016453] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016455] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.823 [2024-04-17 08:28:51.016461] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016464] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016467] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.823 [2024-04-17 08:28:51.016472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.823 [2024-04-17 08:28:51.016486] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.823 [2024-04-17 08:28:51.016558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.823 [2024-04-17 08:28:51.016566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.823 [2024-04-17 08:28:51.016569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016572] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.823 [2024-04-17 08:28:51.016576] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:32:17.823 [2024-04-17 08:28:51.016580] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:32:17.823 [2024-04-17 08:28:51.016587] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016590] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016593] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.823 [2024-04-17 08:28:51.016598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.823 [2024-04-17 08:28:51.016609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.823 [2024-04-17 08:28:51.016652] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.823 [2024-04-17 08:28:51.016657] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.823 [2024-04-17 
08:28:51.016659] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.823 [2024-04-17 08:28:51.016670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016673] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016675] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.823 [2024-04-17 08:28:51.016681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.823 [2024-04-17 08:28:51.016692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.823 [2024-04-17 08:28:51.016743] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.823 [2024-04-17 08:28:51.016748] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.823 [2024-04-17 08:28:51.016750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016753] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.823 [2024-04-17 08:28:51.016761] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.823 [2024-04-17 08:28:51.016772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.823 [2024-04-17 08:28:51.016783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.823 [2024-04-17 08:28:51.016830] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.823 [2024-04-17 08:28:51.016835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.823 [2024-04-17 08:28:51.016837] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016840] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.823 [2024-04-17 08:28:51.016848] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016851] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016853] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.823 [2024-04-17 08:28:51.016858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.823 [2024-04-17 08:28:51.016869] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.823 [2024-04-17 08:28:51.016912] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.823 [2024-04-17 08:28:51.016917] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.823 [2024-04-17 08:28:51.016920] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.823 [2024-04-17 08:28:51.016929] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.016935] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.823 [2024-04-17 08:28:51.016940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.823 [2024-04-17 08:28:51.016951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.823 [2024-04-17 08:28:51.016994] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.823 [2024-04-17 08:28:51.016999] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.823 [2024-04-17 08:28:51.017001] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.017004] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.823 [2024-04-17 08:28:51.017011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.017014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.823 [2024-04-17 08:28:51.017017] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.823 [2024-04-17 08:28:51.017022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.823 [2024-04-17 08:28:51.017032] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.823 [2024-04-17 08:28:51.017075] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.823 [2024-04-17 08:28:51.017080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.823 [2024-04-17 08:28:51.017082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.017085] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.824 [2024-04-17 08:28:51.017093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.017095] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.017098] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.824 [2024-04-17 08:28:51.017103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.824 [2024-04-17 08:28:51.017114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.824 [2024-04-17 08:28:51.017159] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.824 [2024-04-17 08:28:51.017164] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.824 [2024-04-17 08:28:51.017166] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.017169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.824 [2024-04-17 08:28:51.017176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.017179] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.017182] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.824 [2024-04-17 08:28:51.017187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.824 [2024-04-17 08:28:51.017198] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.824 [2024-04-17 08:28:51.017246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.824 [2024-04-17 08:28:51.017250] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.824 [2024-04-17 08:28:51.017253] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.017256] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.824 [2024-04-17 08:28:51.017263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.017266] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.017269] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.824 [2024-04-17 08:28:51.017274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.824 [2024-04-17 08:28:51.017285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.824 [2024-04-17 08:28:51.021328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.824 [2024-04-17 08:28:51.021344] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.824 [2024-04-17 08:28:51.021347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.021350] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.824 [2024-04-17 08:28:51.021358] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.021361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.021364] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9f270) 00:32:17.824 [2024-04-17 08:28:51.021369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.824 [2024-04-17 08:28:51.021384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cdeaf0, cid 3, qid 0 00:32:17.824 [2024-04-17 08:28:51.021433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:17.824 [2024-04-17 08:28:51.021438] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:17.824 [2024-04-17 08:28:51.021441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:17.824 [2024-04-17 08:28:51.021443] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cdeaf0) on tqpair=0x1c9f270 00:32:17.824 [2024-04-17 08:28:51.021449] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:32:17.824 0 Kelvin (-273 Celsius) 00:32:17.824 Available Spare: 0% 00:32:17.824 Available Spare Threshold: 0% 00:32:17.824 Life Percentage Used: 0% 00:32:17.824 Data Units 
Read: 0 00:32:17.824 Data Units Written: 0 00:32:17.824 Host Read Commands: 0 00:32:17.824 Host Write Commands: 0 00:32:17.824 Controller Busy Time: 0 minutes 00:32:17.824 Power Cycles: 0 00:32:17.824 Power On Hours: 0 hours 00:32:17.824 Unsafe Shutdowns: 0 00:32:17.824 Unrecoverable Media Errors: 0 00:32:17.824 Lifetime Error Log Entries: 0 00:32:17.824 Warning Temperature Time: 0 minutes 00:32:17.824 Critical Temperature Time: 0 minutes 00:32:17.824 00:32:17.824 Number of Queues 00:32:17.824 ================ 00:32:17.824 Number of I/O Submission Queues: 127 00:32:17.824 Number of I/O Completion Queues: 127 00:32:17.824 00:32:17.824 Active Namespaces 00:32:17.824 ================= 00:32:17.824 Namespace ID:1 00:32:17.824 Error Recovery Timeout: Unlimited 00:32:17.824 Command Set Identifier: NVM (00h) 00:32:17.824 Deallocate: Supported 00:32:17.824 Deallocated/Unwritten Error: Not Supported 00:32:17.824 Deallocated Read Value: Unknown 00:32:17.824 Deallocate in Write Zeroes: Not Supported 00:32:17.824 Deallocated Guard Field: 0xFFFF 00:32:17.824 Flush: Supported 00:32:17.824 Reservation: Supported 00:32:17.824 Namespace Sharing Capabilities: Multiple Controllers 00:32:17.824 Size (in LBAs): 131072 (0GiB) 00:32:17.824 Capacity (in LBAs): 131072 (0GiB) 00:32:17.824 Utilization (in LBAs): 131072 (0GiB) 00:32:17.824 NGUID: ABCDEF0123456789ABCDEF0123456789 00:32:17.824 EUI64: ABCDEF0123456789 00:32:17.824 UUID: 0cd123af-d1bb-455c-b9e0-9417ea491a87 00:32:17.824 Thin Provisioning: Not Supported 00:32:17.824 Per-NS Atomic Units: Yes 00:32:17.824 Atomic Boundary Size (Normal): 0 00:32:17.824 Atomic Boundary Size (PFail): 0 00:32:17.824 Atomic Boundary Offset: 0 00:32:17.824 Maximum Single Source Range Length: 65535 00:32:17.824 Maximum Copy Length: 65535 00:32:17.824 Maximum Source Range Count: 1 00:32:17.824 NGUID/EUI64 Never Reused: No 00:32:17.824 Namespace Write Protected: No 00:32:17.824 Number of LBA Formats: 1 00:32:17.824 Current LBA Format: LBA Format #00 00:32:17.824 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:17.824 00:32:17.824 08:28:51 -- host/identify.sh@51 -- # sync 00:32:17.824 08:28:51 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.824 08:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:17.824 08:28:51 -- common/autotest_common.sh@10 -- # set +x 00:32:17.824 08:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:17.824 08:28:51 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:32:17.824 08:28:51 -- host/identify.sh@56 -- # nvmftestfini 00:32:17.824 08:28:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:17.824 08:28:51 -- nvmf/common.sh@116 -- # sync 00:32:17.824 08:28:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:17.824 08:28:51 -- nvmf/common.sh@119 -- # set +e 00:32:17.824 08:28:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:17.824 08:28:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:17.824 rmmod nvme_tcp 00:32:17.824 rmmod nvme_fabrics 00:32:17.824 rmmod nvme_keyring 00:32:18.083 08:28:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:18.083 08:28:51 -- nvmf/common.sh@123 -- # set -e 00:32:18.083 08:28:51 -- nvmf/common.sh@124 -- # return 0 00:32:18.083 08:28:51 -- nvmf/common.sh@477 -- # '[' -n 68514 ']' 00:32:18.083 08:28:51 -- nvmf/common.sh@478 -- # killprocess 68514 00:32:18.083 08:28:51 -- common/autotest_common.sh@926 -- # '[' -z 68514 ']' 00:32:18.083 08:28:51 -- common/autotest_common.sh@930 -- # kill -0 68514 00:32:18.083 08:28:51 -- 
common/autotest_common.sh@931 -- # uname 00:32:18.083 08:28:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:18.083 08:28:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68514 00:32:18.083 killing process with pid 68514 00:32:18.083 08:28:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:18.083 08:28:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:18.083 08:28:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68514' 00:32:18.083 08:28:51 -- common/autotest_common.sh@945 -- # kill 68514 00:32:18.083 [2024-04-17 08:28:51.186677] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:18.083 08:28:51 -- common/autotest_common.sh@950 -- # wait 68514 00:32:18.356 08:28:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:18.356 08:28:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:18.356 08:28:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:18.356 08:28:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:18.356 08:28:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:18.356 08:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.356 08:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.356 08:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.356 08:28:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:32:18.356 ************************************ 00:32:18.356 END TEST nvmf_identify 00:32:18.356 ************************************ 00:32:18.356 00:32:18.356 real 0m2.338s 00:32:18.356 user 0m6.227s 00:32:18.356 sys 0m0.624s 00:32:18.356 08:28:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.356 08:28:51 -- common/autotest_common.sh@10 -- # set +x 00:32:18.356 08:28:51 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:18.356 08:28:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:18.356 08:28:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:18.356 08:28:51 -- common/autotest_common.sh@10 -- # set +x 00:32:18.356 ************************************ 00:32:18.356 START TEST nvmf_perf 00:32:18.356 ************************************ 00:32:18.356 08:28:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:18.356 * Looking for test storage... 
00:32:18.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:18.356 08:28:51 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:18.356 08:28:51 -- nvmf/common.sh@7 -- # uname -s 00:32:18.356 08:28:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.356 08:28:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.356 08:28:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.356 08:28:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.356 08:28:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.356 08:28:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.356 08:28:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.356 08:28:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.356 08:28:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.356 08:28:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.356 08:28:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:32:18.356 08:28:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:32:18.356 08:28:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.356 08:28:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.356 08:28:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:18.356 08:28:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:18.356 08:28:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.356 08:28:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.356 08:28:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.356 08:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.356 08:28:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.356 08:28:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.356 08:28:51 -- paths/export.sh@5 -- 
# export PATH 00:32:18.356 08:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.618 08:28:51 -- nvmf/common.sh@46 -- # : 0 00:32:18.618 08:28:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:18.618 08:28:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:18.618 08:28:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:18.618 08:28:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.618 08:28:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.618 08:28:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:18.618 08:28:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:18.618 08:28:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:18.618 08:28:51 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:18.618 08:28:51 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:18.618 08:28:51 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:18.619 08:28:51 -- host/perf.sh@17 -- # nvmftestinit 00:32:18.619 08:28:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:18.619 08:28:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.619 08:28:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:18.619 08:28:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:18.619 08:28:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:18.619 08:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.619 08:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.619 08:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.619 08:28:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:32:18.619 08:28:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:32:18.619 08:28:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:32:18.619 08:28:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:32:18.619 08:28:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:32:18.619 08:28:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:32:18.619 08:28:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.619 08:28:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.619 08:28:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:18.619 08:28:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:32:18.619 08:28:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:18.619 08:28:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:18.619 08:28:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:18.619 08:28:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.619 08:28:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:18.619 08:28:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:18.619 08:28:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:18.619 08:28:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:18.619 08:28:51 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:32:18.619 08:28:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:32:18.619 Cannot find device "nvmf_tgt_br" 00:32:18.619 08:28:51 -- nvmf/common.sh@154 -- # true 00:32:18.619 08:28:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:32:18.619 Cannot find device "nvmf_tgt_br2" 00:32:18.619 08:28:51 -- nvmf/common.sh@155 -- # true 00:32:18.619 08:28:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:32:18.619 08:28:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:32:18.619 Cannot find device "nvmf_tgt_br" 00:32:18.619 08:28:51 -- nvmf/common.sh@157 -- # true 00:32:18.619 08:28:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:32:18.619 Cannot find device "nvmf_tgt_br2" 00:32:18.619 08:28:51 -- nvmf/common.sh@158 -- # true 00:32:18.619 08:28:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:32:18.619 08:28:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:32:18.619 08:28:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:18.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:18.619 08:28:51 -- nvmf/common.sh@161 -- # true 00:32:18.619 08:28:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:18.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:18.619 08:28:51 -- nvmf/common.sh@162 -- # true 00:32:18.619 08:28:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:32:18.619 08:28:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:18.619 08:28:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:18.619 08:28:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:18.619 08:28:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:18.619 08:28:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:18.619 08:28:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:18.619 08:28:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:18.619 08:28:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:18.619 08:28:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:32:18.619 08:28:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:32:18.619 08:28:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:32:18.619 08:28:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:32:18.619 08:28:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:18.879 08:28:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:18.879 08:28:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:18.879 08:28:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:32:18.879 08:28:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:32:18.879 08:28:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:32:18.879 08:28:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:18.879 08:28:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:18.879 08:28:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:18.879 08:28:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:18.879 08:28:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:32:18.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:32:18.879 00:32:18.879 --- 10.0.0.2 ping statistics --- 00:32:18.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.879 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:32:18.879 08:28:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:32:18.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:18.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:32:18.879 00:32:18.879 --- 10.0.0.3 ping statistics --- 00:32:18.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.879 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:32:18.879 08:28:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:18.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:18.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:32:18.879 00:32:18.879 --- 10.0.0.1 ping statistics --- 00:32:18.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.879 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:32:18.879 08:28:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.879 08:28:52 -- nvmf/common.sh@421 -- # return 0 00:32:18.879 08:28:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:18.879 08:28:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.879 08:28:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:18.879 08:28:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:18.879 08:28:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.879 08:28:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:18.879 08:28:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:18.879 08:28:52 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:18.879 08:28:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:18.879 08:28:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:18.879 08:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:18.879 08:28:52 -- nvmf/common.sh@469 -- # nvmfpid=68722 00:32:18.879 08:28:52 -- nvmf/common.sh@470 -- # waitforlisten 68722 00:32:18.879 08:28:52 -- common/autotest_common.sh@819 -- # '[' -z 68722 ']' 00:32:18.879 08:28:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.879 08:28:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:18.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.879 08:28:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:18.879 08:28:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.879 08:28:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:18.879 08:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:18.879 [2024-04-17 08:28:52.128640] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:32:18.879 [2024-04-17 08:28:52.128717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.138 [2024-04-17 08:28:52.257141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:19.138 [2024-04-17 08:28:52.368655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:19.138 [2024-04-17 08:28:52.368807] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:19.138 [2024-04-17 08:28:52.368815] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:19.138 [2024-04-17 08:28:52.368821] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:19.138 [2024-04-17 08:28:52.368958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.138 [2024-04-17 08:28:52.369201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:19.138 [2024-04-17 08:28:52.369428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:19.138 [2024-04-17 08:28:52.369433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.704 08:28:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:19.704 08:28:52 -- common/autotest_common.sh@852 -- # return 0 00:32:19.704 08:28:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:19.704 08:28:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:19.704 08:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:19.704 08:28:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:19.963 08:28:53 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:19.963 08:28:53 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:32:20.221 08:28:53 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:20.221 08:28:53 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:32:20.480 08:28:53 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:32:20.480 08:28:53 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:20.739 08:28:53 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:32:20.739 08:28:53 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:32:20.739 08:28:53 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:20.739 08:28:53 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:20.739 08:28:53 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:21.000 [2024-04-17 08:28:54.076217] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.000 08:28:54 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:21.000 08:28:54 -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:21.000 08:28:54 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:21.259 08:28:54 -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:21.259 08:28:54 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:21.519 
08:28:54 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:21.777 [2024-04-17 08:28:54.947886] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.777 08:28:54 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:22.035 08:28:55 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:32:22.035 08:28:55 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:32:22.035 08:28:55 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:22.035 08:28:55 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:32:23.420 Initializing NVMe Controllers 00:32:23.420 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:23.420 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:32:23.420 Initialization complete. Launching workers. 00:32:23.420 ======================================================== 00:32:23.420 Latency(us) 00:32:23.420 Device Information : IOPS MiB/s Average min max 00:32:23.420 PCIE (0000:00:06.0) NSID 1 from core 0: 28098.21 109.76 1138.90 313.01 6916.43 00:32:23.420 ======================================================== 00:32:23.420 Total : 28098.21 109.76 1138.90 313.01 6916.43 00:32:23.420 00:32:23.420 08:28:56 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:24.803 Initializing NVMe Controllers 00:32:24.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:24.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:24.803 Initialization complete. Launching workers. 00:32:24.803 ======================================================== 00:32:24.803 Latency(us) 00:32:24.803 Device Information : IOPS MiB/s Average min max 00:32:24.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3568.95 13.94 279.94 88.42 6148.75 00:32:24.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.76 0.48 8143.87 6045.21 12050.30 00:32:24.803 ======================================================== 00:32:24.803 Total : 3692.71 14.42 543.49 88.42 12050.30 00:32:24.803 00:32:24.803 08:28:57 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:26.185 Initializing NVMe Controllers 00:32:26.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:26.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:26.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:26.185 Initialization complete. Launching workers. 
00:32:26.185 ======================================================== 00:32:26.185 Latency(us) 00:32:26.185 Device Information : IOPS MiB/s Average min max 00:32:26.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8673.48 33.88 3689.23 481.05 7590.87 00:32:26.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4022.48 15.71 8013.94 5100.27 10548.74 00:32:26.185 ======================================================== 00:32:26.185 Total : 12695.96 49.59 5059.43 481.05 10548.74 00:32:26.185 00:32:26.185 08:28:59 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:32:26.185 08:28:59 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:28.735 Initializing NVMe Controllers 00:32:28.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.735 Controller IO queue size 128, less than required. 00:32:28.735 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.735 Controller IO queue size 128, less than required. 00:32:28.735 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:28.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:28.735 Initialization complete. Launching workers. 00:32:28.735 ======================================================== 00:32:28.735 Latency(us) 00:32:28.735 Device Information : IOPS MiB/s Average min max 00:32:28.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1807.83 451.96 71818.75 29905.61 122143.33 00:32:28.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 679.44 169.86 197255.55 93386.29 314436.84 00:32:28.735 ======================================================== 00:32:28.735 Total : 2487.27 621.82 106083.80 29905.61 314436.84 00:32:28.735 00:32:28.735 08:29:01 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:28.735 No valid NVMe controllers or AIO or URING devices found 00:32:28.735 Initializing NVMe Controllers 00:32:28.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.735 Controller IO queue size 128, less than required. 00:32:28.735 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.735 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:28.735 Controller IO queue size 128, less than required. 00:32:28.735 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.735 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:32:28.735 WARNING: Some requested NVMe devices were skipped 00:32:28.735 08:29:01 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:31.267 Initializing NVMe Controllers 00:32:31.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:31.267 Controller IO queue size 128, less than required. 00:32:31.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:31.267 Controller IO queue size 128, less than required. 00:32:31.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:31.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:31.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:31.267 Initialization complete. Launching workers. 00:32:31.267 00:32:31.267 ==================== 00:32:31.267 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:31.267 TCP transport: 00:32:31.267 polls: 11674 00:32:31.267 idle_polls: 0 00:32:31.267 sock_completions: 11674 00:32:31.267 nvme_completions: 7493 00:32:31.267 submitted_requests: 11463 00:32:31.267 queued_requests: 1 00:32:31.267 00:32:31.267 ==================== 00:32:31.267 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:31.267 TCP transport: 00:32:31.267 polls: 12543 00:32:31.267 idle_polls: 0 00:32:31.267 sock_completions: 12543 00:32:31.268 nvme_completions: 6365 00:32:31.268 submitted_requests: 9771 00:32:31.268 queued_requests: 1 00:32:31.268 ======================================================== 00:32:31.268 Latency(us) 00:32:31.268 Device Information : IOPS MiB/s Average min max 00:32:31.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1936.99 484.25 67779.09 41512.13 117017.14 00:32:31.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1653.99 413.50 77661.38 34860.74 174258.68 00:32:31.268 ======================================================== 00:32:31.268 Total : 3590.98 897.75 72330.83 34860.74 174258.68 00:32:31.268 00:32:31.268 08:29:04 -- host/perf.sh@66 -- # sync 00:32:31.268 08:29:04 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:31.526 08:29:04 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:31.526 08:29:04 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:32:31.526 08:29:04 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:31.786 08:29:04 -- host/perf.sh@72 -- # ls_guid=c241a467-e189-422b-b508-df02ccfe608b 00:32:31.786 08:29:04 -- host/perf.sh@73 -- # get_lvs_free_mb c241a467-e189-422b-b508-df02ccfe608b 00:32:31.786 08:29:04 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c241a467-e189-422b-b508-df02ccfe608b 00:32:31.786 08:29:04 -- common/autotest_common.sh@1344 -- # local lvs_info 00:32:31.786 08:29:04 -- common/autotest_common.sh@1345 -- # local fc 00:32:31.786 08:29:04 -- common/autotest_common.sh@1346 -- # local cs 00:32:31.786 08:29:04 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:31.786 08:29:05 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:32:31.786 { 
00:32:31.786 "uuid": "c241a467-e189-422b-b508-df02ccfe608b", 00:32:31.786 "name": "lvs_0", 00:32:31.786 "base_bdev": "Nvme0n1", 00:32:31.786 "total_data_clusters": 1278, 00:32:31.786 "free_clusters": 1278, 00:32:31.786 "block_size": 4096, 00:32:31.786 "cluster_size": 4194304 00:32:31.786 } 00:32:31.786 ]' 00:32:31.786 08:29:05 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c241a467-e189-422b-b508-df02ccfe608b") .free_clusters' 00:32:32.045 08:29:05 -- common/autotest_common.sh@1348 -- # fc=1278 00:32:32.045 08:29:05 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c241a467-e189-422b-b508-df02ccfe608b") .cluster_size' 00:32:32.045 5112 00:32:32.045 08:29:05 -- common/autotest_common.sh@1349 -- # cs=4194304 00:32:32.045 08:29:05 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:32:32.045 08:29:05 -- common/autotest_common.sh@1353 -- # echo 5112 00:32:32.045 08:29:05 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:32:32.045 08:29:05 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c241a467-e189-422b-b508-df02ccfe608b lbd_0 5112 00:32:32.305 08:29:05 -- host/perf.sh@80 -- # lb_guid=83866000-2237-4261-b3e0-c519b32742bf 00:32:32.305 08:29:05 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 83866000-2237-4261-b3e0-c519b32742bf lvs_n_0 00:32:32.564 08:29:05 -- host/perf.sh@83 -- # ls_nested_guid=29184661-432a-49bb-874b-53567f8ef33d 00:32:32.564 08:29:05 -- host/perf.sh@84 -- # get_lvs_free_mb 29184661-432a-49bb-874b-53567f8ef33d 00:32:32.564 08:29:05 -- common/autotest_common.sh@1343 -- # local lvs_uuid=29184661-432a-49bb-874b-53567f8ef33d 00:32:32.564 08:29:05 -- common/autotest_common.sh@1344 -- # local lvs_info 00:32:32.564 08:29:05 -- common/autotest_common.sh@1345 -- # local fc 00:32:32.564 08:29:05 -- common/autotest_common.sh@1346 -- # local cs 00:32:32.564 08:29:05 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:32.822 08:29:05 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:32:32.822 { 00:32:32.822 "uuid": "c241a467-e189-422b-b508-df02ccfe608b", 00:32:32.822 "name": "lvs_0", 00:32:32.822 "base_bdev": "Nvme0n1", 00:32:32.822 "total_data_clusters": 1278, 00:32:32.822 "free_clusters": 0, 00:32:32.822 "block_size": 4096, 00:32:32.822 "cluster_size": 4194304 00:32:32.822 }, 00:32:32.823 { 00:32:32.823 "uuid": "29184661-432a-49bb-874b-53567f8ef33d", 00:32:32.823 "name": "lvs_n_0", 00:32:32.823 "base_bdev": "83866000-2237-4261-b3e0-c519b32742bf", 00:32:32.823 "total_data_clusters": 1276, 00:32:32.823 "free_clusters": 1276, 00:32:32.823 "block_size": 4096, 00:32:32.823 "cluster_size": 4194304 00:32:32.823 } 00:32:32.823 ]' 00:32:32.823 08:29:05 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="29184661-432a-49bb-874b-53567f8ef33d") .free_clusters' 00:32:32.823 08:29:05 -- common/autotest_common.sh@1348 -- # fc=1276 00:32:32.823 08:29:05 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="29184661-432a-49bb-874b-53567f8ef33d") .cluster_size' 00:32:32.823 08:29:06 -- common/autotest_common.sh@1349 -- # cs=4194304 00:32:32.823 08:29:06 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:32:32.823 08:29:06 -- common/autotest_common.sh@1353 -- # echo 5104 00:32:32.823 5104 00:32:32.823 08:29:06 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:32:32.823 08:29:06 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 29184661-432a-49bb-874b-53567f8ef33d 
lbd_nest_0 5104 00:32:33.082 08:29:06 -- host/perf.sh@88 -- # lb_nested_guid=4dca31e3-dd74-4fa3-a780-186e91403617 00:32:33.082 08:29:06 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:33.341 08:29:06 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:33.341 08:29:06 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4dca31e3-dd74-4fa3-a780-186e91403617 00:32:33.601 08:29:06 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:33.601 08:29:06 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:33.601 08:29:06 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:33.601 08:29:06 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:33.601 08:29:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:33.601 08:29:06 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:33.860 No valid NVMe controllers or AIO or URING devices found 00:32:34.119 Initializing NVMe Controllers 00:32:34.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:34.119 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:32:34.119 WARNING: Some requested NVMe devices were skipped 00:32:34.119 08:29:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:34.119 08:29:07 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:46.331 Initializing NVMe Controllers 00:32:46.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:46.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:46.331 Initialization complete. Launching workers. 
00:32:46.331 ======================================================== 00:32:46.331 Latency(us) 00:32:46.331 Device Information : IOPS MiB/s Average min max 00:32:46.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1150.10 143.76 869.21 256.46 7593.93 00:32:46.331 ======================================================== 00:32:46.331 Total : 1150.10 143.76 869.21 256.46 7593.93 00:32:46.331 00:32:46.331 08:29:17 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:46.331 08:29:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:46.331 08:29:17 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:46.331 No valid NVMe controllers or AIO or URING devices found 00:32:46.331 Initializing NVMe Controllers 00:32:46.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:46.331 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:32:46.331 WARNING: Some requested NVMe devices were skipped 00:32:46.331 08:29:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:46.331 08:29:17 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:56.352 Initializing NVMe Controllers 00:32:56.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:56.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:56.352 Initialization complete. Launching workers. 00:32:56.352 ======================================================== 00:32:56.352 Latency(us) 00:32:56.352 Device Information : IOPS MiB/s Average min max 00:32:56.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1269.40 158.67 25258.75 7949.97 67662.07 00:32:56.352 ======================================================== 00:32:56.352 Total : 1269.40 158.67 25258.75 7949.97 67662.07 00:32:56.352 00:32:56.352 08:29:28 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:56.352 08:29:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:56.352 08:29:28 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:56.352 No valid NVMe controllers or AIO or URING devices found 00:32:56.352 Initializing NVMe Controllers 00:32:56.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:56.352 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:32:56.352 WARNING: Some requested NVMe devices were skipped 00:32:56.352 08:29:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:56.352 08:29:28 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:06.335 Initializing NVMe Controllers 00:33:06.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:06.335 Controller IO queue size 128, less than required. 00:33:06.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:33:06.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:06.335 Initialization complete. Launching workers. 00:33:06.335 ======================================================== 00:33:06.335 Latency(us) 00:33:06.335 Device Information : IOPS MiB/s Average min max 00:33:06.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4425.28 553.16 28956.97 6280.89 117584.72 00:33:06.335 ======================================================== 00:33:06.335 Total : 4425.28 553.16 28956.97 6280.89 117584.72 00:33:06.335 00:33:06.335 08:29:38 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:06.335 08:29:38 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4dca31e3-dd74-4fa3-a780-186e91403617 00:33:06.335 08:29:39 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:06.335 08:29:39 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 83866000-2237-4261-b3e0-c519b32742bf 00:33:06.595 08:29:39 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:06.855 08:29:39 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:06.855 08:29:39 -- host/perf.sh@114 -- # nvmftestfini 00:33:06.855 08:29:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:06.855 08:29:39 -- nvmf/common.sh@116 -- # sync 00:33:06.855 08:29:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:06.855 08:29:39 -- nvmf/common.sh@119 -- # set +e 00:33:06.855 08:29:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:06.855 08:29:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:06.855 rmmod nvme_tcp 00:33:06.855 rmmod nvme_fabrics 00:33:06.855 rmmod nvme_keyring 00:33:06.855 08:29:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:06.855 08:29:40 -- nvmf/common.sh@123 -- # set -e 00:33:06.855 08:29:40 -- nvmf/common.sh@124 -- # return 0 00:33:06.855 08:29:40 -- nvmf/common.sh@477 -- # '[' -n 68722 ']' 00:33:06.855 08:29:40 -- nvmf/common.sh@478 -- # killprocess 68722 00:33:06.855 08:29:40 -- common/autotest_common.sh@926 -- # '[' -z 68722 ']' 00:33:06.855 08:29:40 -- common/autotest_common.sh@930 -- # kill -0 68722 00:33:06.855 08:29:40 -- common/autotest_common.sh@931 -- # uname 00:33:06.855 08:29:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:06.855 08:29:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68722 00:33:06.855 killing process with pid 68722 00:33:06.855 08:29:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:06.855 08:29:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:06.855 08:29:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68722' 00:33:06.855 08:29:40 -- common/autotest_common.sh@945 -- # kill 68722 00:33:06.855 08:29:40 -- common/autotest_common.sh@950 -- # wait 68722 00:33:08.761 08:29:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:08.761 08:29:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:08.761 08:29:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:08.761 08:29:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:08.761 08:29:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:08.761 08:29:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.761 08:29:41 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:33:08.761 08:29:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.761 08:29:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:08.761 ************************************ 00:33:08.761 END TEST nvmf_perf 00:33:08.761 ************************************ 00:33:08.761 00:33:08.761 real 0m50.469s 00:33:08.761 user 3m10.657s 00:33:08.761 sys 0m11.757s 00:33:08.761 08:29:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.761 08:29:42 -- common/autotest_common.sh@10 -- # set +x 00:33:08.761 08:29:42 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:08.761 08:29:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:08.761 08:29:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:08.761 08:29:42 -- common/autotest_common.sh@10 -- # set +x 00:33:08.761 ************************************ 00:33:08.761 START TEST nvmf_fio_host 00:33:08.761 ************************************ 00:33:08.761 08:29:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:09.054 * Looking for test storage... 00:33:09.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:09.054 08:29:42 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:09.054 08:29:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.054 08:29:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.054 08:29:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.054 08:29:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.054 08:29:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.054 08:29:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.054 08:29:42 -- paths/export.sh@5 -- # export PATH 00:33:09.054 08:29:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.054 08:29:42 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:09.054 08:29:42 -- nvmf/common.sh@7 -- # uname -s 00:33:09.054 08:29:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.054 08:29:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.054 08:29:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.054 08:29:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.054 08:29:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.054 08:29:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.054 08:29:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.054 08:29:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.054 08:29:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.054 08:29:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.054 08:29:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:33:09.054 08:29:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:33:09.055 08:29:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.055 08:29:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.055 08:29:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:09.055 08:29:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:09.055 08:29:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.055 08:29:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.055 08:29:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.055 08:29:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.055 08:29:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.055 08:29:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.055 08:29:42 -- paths/export.sh@5 -- # export PATH 00:33:09.055 08:29:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.055 08:29:42 -- nvmf/common.sh@46 -- # : 0 00:33:09.055 08:29:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:09.055 08:29:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:09.055 08:29:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:09.055 08:29:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.055 08:29:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.055 08:29:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:09.055 08:29:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:09.055 08:29:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:09.055 08:29:42 -- host/fio.sh@12 -- # nvmftestinit 00:33:09.055 08:29:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:09.055 08:29:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.055 08:29:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:09.055 08:29:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:09.055 08:29:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:09.055 08:29:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.055 08:29:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:09.055 08:29:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.055 08:29:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:09.055 08:29:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:09.055 08:29:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:09.055 08:29:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:09.055 08:29:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:09.055 08:29:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:09.055 08:29:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.055 08:29:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.055 08:29:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:09.055 08:29:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:09.055 08:29:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:09.055 08:29:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:09.055 08:29:42 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:09.055 08:29:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.055 08:29:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:09.055 08:29:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:09.055 08:29:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:09.055 08:29:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:09.055 08:29:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:09.055 08:29:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:09.055 Cannot find device "nvmf_tgt_br" 00:33:09.055 08:29:42 -- nvmf/common.sh@154 -- # true 00:33:09.055 08:29:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:09.055 Cannot find device "nvmf_tgt_br2" 00:33:09.055 08:29:42 -- nvmf/common.sh@155 -- # true 00:33:09.055 08:29:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:09.055 08:29:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:09.055 Cannot find device "nvmf_tgt_br" 00:33:09.055 08:29:42 -- nvmf/common.sh@157 -- # true 00:33:09.055 08:29:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:09.055 Cannot find device "nvmf_tgt_br2" 00:33:09.055 08:29:42 -- nvmf/common.sh@158 -- # true 00:33:09.055 08:29:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:09.055 08:29:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:09.315 08:29:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:09.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:09.315 08:29:42 -- nvmf/common.sh@161 -- # true 00:33:09.315 08:29:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:09.315 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:09.315 08:29:42 -- nvmf/common.sh@162 -- # true 00:33:09.315 08:29:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:09.315 08:29:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:09.315 08:29:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:09.315 08:29:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:09.315 08:29:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:09.315 08:29:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:09.315 08:29:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:09.315 08:29:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:09.315 08:29:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:09.315 08:29:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:09.315 08:29:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:09.315 08:29:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:09.315 08:29:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:09.315 08:29:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:09.315 08:29:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:09.315 08:29:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:33:09.315 08:29:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:09.315 08:29:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:09.315 08:29:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:09.315 08:29:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:09.315 08:29:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:09.315 08:29:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:09.315 08:29:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:09.315 08:29:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:09.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:09.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:33:09.315 00:33:09.315 --- 10.0.0.2 ping statistics --- 00:33:09.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.315 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:33:09.315 08:29:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:09.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:09.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:33:09.315 00:33:09.315 --- 10.0.0.3 ping statistics --- 00:33:09.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.315 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:33:09.315 08:29:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:09.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:09.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:33:09.315 00:33:09.315 --- 10.0.0.1 ping statistics --- 00:33:09.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.315 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:33:09.315 08:29:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.315 08:29:42 -- nvmf/common.sh@421 -- # return 0 00:33:09.315 08:29:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:09.315 08:29:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.315 08:29:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:09.315 08:29:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:09.315 08:29:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.315 08:29:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:09.315 08:29:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:09.315 08:29:42 -- host/fio.sh@14 -- # [[ y != y ]] 00:33:09.315 08:29:42 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:33:09.315 08:29:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:09.315 08:29:42 -- common/autotest_common.sh@10 -- # set +x 00:33:09.315 08:29:42 -- host/fio.sh@22 -- # nvmfpid=69544 00:33:09.315 08:29:42 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:09.315 08:29:42 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:09.315 08:29:42 -- host/fio.sh@26 -- # waitforlisten 69544 00:33:09.315 08:29:42 -- common/autotest_common.sh@819 -- # '[' -z 69544 ']' 00:33:09.315 08:29:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
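For orientation, a condensed recap of the veth/namespace topology that the nvmf_veth_init commands above build; this is a sketch distilled from the log for readability, not additional test output, and it reuses the interface names and addresses shown there.

  # Target side lives in its own network namespace; the initiator stays on the host.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator, 10.0.0.1/24 (host)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target,    10.0.0.2/24 (in netns)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target,    10.0.0.3/24 (in netns)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # The *_br peers are enslaved to the nvmf_br bridge so host and namespace can reach each
  # other; iptables accepts TCP port 4420 on nvmf_init_if, and the three pings above verify
  # connectivity to 10.0.0.2, 10.0.0.3 and (from inside the namespace) back to 10.0.0.1.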
00:33:09.315 08:29:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:09.315 08:29:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.315 08:29:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:09.315 08:29:42 -- common/autotest_common.sh@10 -- # set +x 00:33:09.574 [2024-04-17 08:29:42.694331] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:09.575 [2024-04-17 08:29:42.694402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.575 [2024-04-17 08:29:42.835368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:09.834 [2024-04-17 08:29:42.940731] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:09.834 [2024-04-17 08:29:42.940965] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.834 [2024-04-17 08:29:42.940990] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.834 [2024-04-17 08:29:42.941051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:09.834 [2024-04-17 08:29:42.941205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.834 [2024-04-17 08:29:42.941399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:09.834 [2024-04-17 08:29:42.941477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.834 [2024-04-17 08:29:42.941482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:10.402 08:29:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:10.402 08:29:43 -- common/autotest_common.sh@852 -- # return 0 00:33:10.402 08:29:43 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:10.402 08:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:10.402 08:29:43 -- common/autotest_common.sh@10 -- # set +x 00:33:10.402 [2024-04-17 08:29:43.574792] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.402 08:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:10.402 08:29:43 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:33:10.402 08:29:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:10.402 08:29:43 -- common/autotest_common.sh@10 -- # set +x 00:33:10.402 08:29:43 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:10.402 08:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:10.402 08:29:43 -- common/autotest_common.sh@10 -- # set +x 00:33:10.402 Malloc1 00:33:10.402 08:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:10.402 08:29:43 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:10.402 08:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:10.402 08:29:43 -- common/autotest_common.sh@10 -- # set +x 00:33:10.402 08:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:10.402 08:29:43 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:10.402 08:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:10.402 08:29:43 -- common/autotest_common.sh@10 -- # set +x 00:33:10.402 08:29:43 
-- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:10.402 08:29:43 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.402 08:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:10.402 08:29:43 -- common/autotest_common.sh@10 -- # set +x 00:33:10.402 [2024-04-17 08:29:43.693393] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.402 08:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:10.402 08:29:43 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:10.402 08:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:10.402 08:29:43 -- common/autotest_common.sh@10 -- # set +x 00:33:10.402 08:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:10.402 08:29:43 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:33:10.402 08:29:43 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:10.402 08:29:43 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:10.402 08:29:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:10.402 08:29:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.402 08:29:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:10.402 08:29:43 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:10.402 08:29:43 -- common/autotest_common.sh@1320 -- # shift 00:33:10.402 08:29:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:10.402 08:29:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.402 08:29:43 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:10.402 08:29:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:10.402 08:29:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:10.662 08:29:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:10.662 08:29:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:10.662 08:29:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.662 08:29:43 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:10.662 08:29:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:10.662 08:29:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:10.662 08:29:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:10.662 08:29:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:10.662 08:29:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:10.662 08:29:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:10.662 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:10.662 fio-3.35 00:33:10.662 Starting 1 thread 00:33:13.198 00:33:13.198 test: (groupid=0, jobs=1): err= 
0: pid=69599: Wed Apr 17 08:29:46 2024 00:33:13.198 read: IOPS=9981, BW=39.0MiB/s (40.9MB/s)(78.2MiB/2006msec) 00:33:13.198 slat (nsec): min=1589, max=439788, avg=2107.31, stdev=4034.31 00:33:13.198 clat (usec): min=3655, max=11557, avg=6698.05, stdev=624.20 00:33:13.198 lat (usec): min=3657, max=11559, avg=6700.16, stdev=624.31 00:33:13.198 clat percentiles (usec): 00:33:13.198 | 1.00th=[ 5342], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:33:13.198 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:33:13.198 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7439], 95.00th=[ 7767], 00:33:13.198 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[10159], 99.95th=[10945], 00:33:13.198 | 99.99th=[11338] 00:33:13.198 bw ( KiB/s): min=39264, max=40512, per=99.95%, avg=39908.00, stdev=651.25, samples=4 00:33:13.198 iops : min= 9816, max=10128, avg=9977.00, stdev=162.81, samples=4 00:33:13.198 write: IOPS=9997, BW=39.1MiB/s (40.9MB/s)(78.3MiB/2006msec); 0 zone resets 00:33:13.198 slat (nsec): min=1628, max=333505, avg=2183.60, stdev=2652.01 00:33:13.198 clat (usec): min=2730, max=11129, avg=6054.86, stdev=566.83 00:33:13.198 lat (usec): min=2732, max=11131, avg=6057.04, stdev=567.03 00:33:13.198 clat percentiles (usec): 00:33:13.198 | 1.00th=[ 4555], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:33:13.198 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:33:13.198 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6652], 95.00th=[ 6980], 00:33:13.198 | 99.00th=[ 7767], 99.50th=[ 8029], 99.90th=[ 8979], 99.95th=[10290], 00:33:13.198 | 99.99th=[11076] 00:33:13.198 bw ( KiB/s): min=38848, max=41296, per=100.00%, avg=39992.00, stdev=1009.78, samples=4 00:33:13.198 iops : min= 9712, max=10324, avg=9998.00, stdev=252.44, samples=4 00:33:13.198 lat (msec) : 4=0.35%, 10=99.55%, 20=0.10% 00:33:13.198 cpu : usr=74.81%, sys=18.95%, ctx=23, majf=0, minf=5 00:33:13.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:13.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:13.198 issued rwts: total=20023,20054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:13.198 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:13.198 00:33:13.198 Run status group 0 (all jobs): 00:33:13.198 READ: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=78.2MiB (82.0MB), run=2006-2006msec 00:33:13.198 WRITE: bw=39.1MiB/s (40.9MB/s), 39.1MiB/s-39.1MiB/s (40.9MB/s-40.9MB/s), io=78.3MiB (82.1MB), run=2006-2006msec 00:33:13.198 08:29:46 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:13.198 08:29:46 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:13.198 08:29:46 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:13.198 08:29:46 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:13.198 08:29:46 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:13.198 08:29:46 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:13.198 08:29:46 -- common/autotest_common.sh@1320 -- # shift 00:33:13.198 08:29:46 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:33:13.198 08:29:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:13.198 08:29:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:13.198 08:29:46 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:13.198 08:29:46 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:13.198 08:29:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:13.198 08:29:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:13.198 08:29:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:13.198 08:29:46 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:13.198 08:29:46 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:13.198 08:29:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:13.198 08:29:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:13.198 08:29:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:13.198 08:29:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:13.198 08:29:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:13.198 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:13.198 fio-3.35 00:33:13.198 Starting 1 thread 00:33:15.764 00:33:15.764 test: (groupid=0, jobs=1): err= 0: pid=69653: Wed Apr 17 08:29:48 2024 00:33:15.764 read: IOPS=9115, BW=142MiB/s (149MB/s)(285MiB/2001msec) 00:33:15.764 slat (usec): min=2, max=114, avg= 3.45, stdev= 1.78 00:33:15.764 clat (usec): min=161, max=16259, avg=7812.48, stdev=2339.41 00:33:15.764 lat (usec): min=173, max=16262, avg=7815.93, stdev=2339.62 00:33:15.764 clat percentiles (usec): 00:33:15.764 | 1.00th=[ 3818], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 5735], 00:33:15.764 | 30.00th=[ 6390], 40.00th=[ 6980], 50.00th=[ 7635], 60.00th=[ 8225], 00:33:15.764 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[11994], 00:33:15.764 | 99.00th=[14615], 99.50th=[15139], 99.90th=[15795], 99.95th=[15926], 00:33:15.764 | 99.99th=[16188] 00:33:15.764 bw ( KiB/s): min=66528, max=77477, per=49.70%, avg=72492.33, stdev=5539.85, samples=3 00:33:15.764 iops : min= 4158, max= 4842, avg=4530.67, stdev=346.10, samples=3 00:33:15.764 write: IOPS=5298, BW=82.8MiB/s (86.8MB/s)(145MiB/1756msec); 0 zone resets 00:33:15.764 slat (usec): min=33, max=254, avg=37.67, stdev= 7.50 00:33:15.764 clat (usec): min=3772, max=24806, avg=11252.56, stdev=2398.20 00:33:15.764 lat (usec): min=3809, max=24849, avg=11290.23, stdev=2400.34 00:33:15.764 clat percentiles (usec): 00:33:15.764 | 1.00th=[ 7111], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9372], 00:33:15.764 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:33:15.764 | 70.00th=[12125], 80.00th=[12911], 90.00th=[13960], 95.00th=[15008], 00:33:15.764 | 99.00th=[19268], 99.50th=[23725], 99.90th=[24249], 99.95th=[24511], 00:33:15.764 | 99.99th=[24773] 00:33:15.764 bw ( KiB/s): min=69600, max=80734, per=89.07%, avg=75519.33, stdev=5600.35, samples=3 00:33:15.764 iops : min= 4350, max= 5045, avg=4719.67, stdev=349.61, samples=3 00:33:15.764 lat (usec) : 250=0.01% 00:33:15.764 lat (msec) : 2=0.01%, 4=1.09%, 10=65.38%, 20=33.22%, 50=0.31% 00:33:15.764 cpu : 
usr=84.85%, sys=11.50%, ctx=5, majf=0, minf=27 00:33:15.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:33:15.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:15.764 issued rwts: total=18241,9305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:15.764 00:33:15.764 Run status group 0 (all jobs): 00:33:15.764 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=285MiB (299MB), run=2001-2001msec 00:33:15.764 WRITE: bw=82.8MiB/s (86.8MB/s), 82.8MiB/s-82.8MiB/s (86.8MB/s-86.8MB/s), io=145MiB (152MB), run=1756-1756msec 00:33:15.764 08:29:48 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:15.764 08:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.764 08:29:48 -- common/autotest_common.sh@10 -- # set +x 00:33:15.764 08:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.764 08:29:48 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:33:15.764 08:29:48 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:33:15.764 08:29:48 -- host/fio.sh@49 -- # get_nvme_bdfs 00:33:15.764 08:29:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:15.764 08:29:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:15.764 08:29:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:15.764 08:29:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:15.764 08:29:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:15.764 08:29:48 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:33:15.764 08:29:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:33:15.764 08:29:48 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:33:15.764 08:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.764 08:29:48 -- common/autotest_common.sh@10 -- # set +x 00:33:15.764 Nvme0n1 00:33:15.764 08:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.764 08:29:48 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:15.764 08:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.764 08:29:48 -- common/autotest_common.sh@10 -- # set +x 00:33:15.764 08:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.764 08:29:48 -- host/fio.sh@51 -- # ls_guid=6aec9cd7-c297-404d-bb2b-e0a36c8b7ee0 00:33:15.764 08:29:48 -- host/fio.sh@52 -- # get_lvs_free_mb 6aec9cd7-c297-404d-bb2b-e0a36c8b7ee0 00:33:15.764 08:29:48 -- common/autotest_common.sh@1343 -- # local lvs_uuid=6aec9cd7-c297-404d-bb2b-e0a36c8b7ee0 00:33:15.764 08:29:48 -- common/autotest_common.sh@1344 -- # local lvs_info 00:33:15.764 08:29:48 -- common/autotest_common.sh@1345 -- # local fc 00:33:15.764 08:29:48 -- common/autotest_common.sh@1346 -- # local cs 00:33:15.764 08:29:48 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:33:15.764 08:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.764 08:29:48 -- common/autotest_common.sh@10 -- # set +x 00:33:15.764 08:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.764 08:29:48 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:33:15.764 { 00:33:15.764 "uuid": 
"6aec9cd7-c297-404d-bb2b-e0a36c8b7ee0", 00:33:15.764 "name": "lvs_0", 00:33:15.764 "base_bdev": "Nvme0n1", 00:33:15.764 "total_data_clusters": 4, 00:33:15.764 "free_clusters": 4, 00:33:15.764 "block_size": 4096, 00:33:15.764 "cluster_size": 1073741824 00:33:15.764 } 00:33:15.764 ]' 00:33:15.764 08:29:48 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="6aec9cd7-c297-404d-bb2b-e0a36c8b7ee0") .free_clusters' 00:33:15.764 08:29:48 -- common/autotest_common.sh@1348 -- # fc=4 00:33:15.764 08:29:48 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="6aec9cd7-c297-404d-bb2b-e0a36c8b7ee0") .cluster_size' 00:33:15.764 4096 00:33:15.764 08:29:48 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:33:15.764 08:29:48 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:33:15.764 08:29:48 -- common/autotest_common.sh@1353 -- # echo 4096 00:33:15.764 08:29:48 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:33:15.764 08:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.764 08:29:48 -- common/autotest_common.sh@10 -- # set +x 00:33:15.764 6a17f6aa-227d-4cbf-9b89-0c000b6622ce 00:33:15.764 08:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.764 08:29:49 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:15.764 08:29:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.764 08:29:49 -- common/autotest_common.sh@10 -- # set +x 00:33:15.764 08:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.764 08:29:49 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:15.764 08:29:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.764 08:29:49 -- common/autotest_common.sh@10 -- # set +x 00:33:15.764 08:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.764 08:29:49 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:15.764 08:29:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.764 08:29:49 -- common/autotest_common.sh@10 -- # set +x 00:33:15.764 08:29:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.764 08:29:49 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:15.764 08:29:49 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:15.764 08:29:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:15.764 08:29:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:15.765 08:29:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:15.765 08:29:49 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:15.765 08:29:49 -- common/autotest_common.sh@1320 -- # shift 00:33:15.765 08:29:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:15.765 08:29:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.765 08:29:49 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:15.765 08:29:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:15.765 08:29:49 -- 
common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:15.765 08:29:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:15.765 08:29:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:15.765 08:29:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.765 08:29:49 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:15.765 08:29:49 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:15.765 08:29:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:16.023 08:29:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:16.023 08:29:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:16.023 08:29:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:16.023 08:29:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.023 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:16.023 fio-3.35 00:33:16.023 Starting 1 thread 00:33:18.557 00:33:18.557 test: (groupid=0, jobs=1): err= 0: pid=69726: Wed Apr 17 08:29:51 2024 00:33:18.557 read: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(55.8MiB/2007msec) 00:33:18.557 slat (nsec): min=1781, max=419464, avg=2138.09, stdev=4472.74 00:33:18.557 clat (usec): min=3527, max=16121, avg=9438.63, stdev=799.83 00:33:18.557 lat (usec): min=3541, max=16123, avg=9440.77, stdev=799.48 00:33:18.557 clat percentiles (usec): 00:33:18.557 | 1.00th=[ 7635], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8848], 00:33:18.557 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:33:18.557 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:33:18.557 | 99.00th=[11469], 99.50th=[11863], 99.90th=[15008], 99.95th=[15270], 00:33:18.557 | 99.99th=[16057] 00:33:18.557 bw ( KiB/s): min=27648, max=28984, per=99.84%, avg=28402.00, stdev=604.78, samples=4 00:33:18.557 iops : min= 6912, max= 7246, avg=7100.50, stdev=151.19, samples=4 00:33:18.557 write: IOPS=7107, BW=27.8MiB/s (29.1MB/s)(55.7MiB/2007msec); 0 zone resets 00:33:18.557 slat (nsec): min=1868, max=325309, avg=2215.62, stdev=3001.52 00:33:18.557 clat (usec): min=3376, max=15381, avg=8511.59, stdev=725.82 00:33:18.557 lat (usec): min=3394, max=15383, avg=8513.80, stdev=725.62 00:33:18.557 clat percentiles (usec): 00:33:18.557 | 1.00th=[ 6783], 5.00th=[ 7439], 10.00th=[ 7635], 20.00th=[ 7963], 00:33:18.557 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8717], 00:33:18.557 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:33:18.557 | 99.00th=[10159], 99.50th=[10421], 99.90th=[13042], 99.95th=[14222], 00:33:18.557 | 99.99th=[15270] 00:33:18.557 bw ( KiB/s): min=28160, max=28560, per=99.96%, avg=28418.00, stdev=185.03, samples=4 00:33:18.557 iops : min= 7040, max= 7140, avg=7104.50, stdev=46.26, samples=4 00:33:18.557 lat (msec) : 4=0.03%, 10=88.45%, 20=11.52% 00:33:18.557 cpu : usr=78.61%, sys=17.10%, ctx=5, majf=0, minf=29 00:33:18.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:18.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:18.557 issued rwts: total=14273,14264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:18.557 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:33:18.557 00:33:18.557 Run status group 0 (all jobs): 00:33:18.557 READ: bw=27.8MiB/s (29.1MB/s), 27.8MiB/s-27.8MiB/s (29.1MB/s-29.1MB/s), io=55.8MiB (58.5MB), run=2007-2007msec 00:33:18.557 WRITE: bw=27.8MiB/s (29.1MB/s), 27.8MiB/s-27.8MiB/s (29.1MB/s-29.1MB/s), io=55.7MiB (58.4MB), run=2007-2007msec 00:33:18.557 08:29:51 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:18.557 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.557 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.557 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.557 08:29:51 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:18.557 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.557 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.557 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.557 08:29:51 -- host/fio.sh@62 -- # ls_nested_guid=be483738-bd5d-4020-b777-5475ac8431bb 00:33:18.557 08:29:51 -- host/fio.sh@63 -- # get_lvs_free_mb be483738-bd5d-4020-b777-5475ac8431bb 00:33:18.557 08:29:51 -- common/autotest_common.sh@1343 -- # local lvs_uuid=be483738-bd5d-4020-b777-5475ac8431bb 00:33:18.557 08:29:51 -- common/autotest_common.sh@1344 -- # local lvs_info 00:33:18.557 08:29:51 -- common/autotest_common.sh@1345 -- # local fc 00:33:18.557 08:29:51 -- common/autotest_common.sh@1346 -- # local cs 00:33:18.557 08:29:51 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:33:18.557 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.557 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.557 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.557 08:29:51 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:33:18.557 { 00:33:18.557 "uuid": "6aec9cd7-c297-404d-bb2b-e0a36c8b7ee0", 00:33:18.557 "name": "lvs_0", 00:33:18.557 "base_bdev": "Nvme0n1", 00:33:18.557 "total_data_clusters": 4, 00:33:18.557 "free_clusters": 0, 00:33:18.557 "block_size": 4096, 00:33:18.557 "cluster_size": 1073741824 00:33:18.557 }, 00:33:18.557 { 00:33:18.557 "uuid": "be483738-bd5d-4020-b777-5475ac8431bb", 00:33:18.557 "name": "lvs_n_0", 00:33:18.557 "base_bdev": "6a17f6aa-227d-4cbf-9b89-0c000b6622ce", 00:33:18.557 "total_data_clusters": 1022, 00:33:18.557 "free_clusters": 1022, 00:33:18.557 "block_size": 4096, 00:33:18.557 "cluster_size": 4194304 00:33:18.557 } 00:33:18.557 ]' 00:33:18.557 08:29:51 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="be483738-bd5d-4020-b777-5475ac8431bb") .free_clusters' 00:33:18.557 08:29:51 -- common/autotest_common.sh@1348 -- # fc=1022 00:33:18.557 08:29:51 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="be483738-bd5d-4020-b777-5475ac8431bb") .cluster_size' 00:33:18.557 08:29:51 -- common/autotest_common.sh@1349 -- # cs=4194304 00:33:18.557 08:29:51 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:33:18.557 08:29:51 -- common/autotest_common.sh@1353 -- # echo 4088 00:33:18.557 4088 00:33:18.557 08:29:51 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:33:18.557 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.557 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.557 11fd8d7f-1107-4fa3-aba0-fde0250d8cde 00:33:18.557 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.557 08:29:51 -- host/fio.sh@65 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:18.557 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.557 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.557 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.557 08:29:51 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:18.557 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.557 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.557 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.557 08:29:51 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:18.557 08:29:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.557 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:18.557 08:29:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.557 08:29:51 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:18.557 08:29:51 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:18.557 08:29:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:18.557 08:29:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:18.557 08:29:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:18.557 08:29:51 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:18.557 08:29:51 -- common/autotest_common.sh@1320 -- # shift 00:33:18.557 08:29:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:18.557 08:29:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:18.557 08:29:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:18.557 08:29:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:18.557 08:29:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:18.557 08:29:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:18.557 08:29:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:18.557 08:29:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:18.557 08:29:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:18.557 08:29:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:18.557 08:29:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:18.557 08:29:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:18.557 08:29:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:18.557 08:29:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:18.558 08:29:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:18.816 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:18.816 fio-3.35 00:33:18.816 Starting 1 thread 00:33:21.350 00:33:21.350 test: 
(groupid=0, jobs=1): err= 0: pid=69781: Wed Apr 17 08:29:54 2024 00:33:21.350 read: IOPS=6069, BW=23.7MiB/s (24.9MB/s)(47.6MiB/2009msec) 00:33:21.350 slat (nsec): min=1644, max=311694, avg=2914.12, stdev=3982.15 00:33:21.350 clat (usec): min=2987, max=20336, avg=11025.08, stdev=1048.61 00:33:21.350 lat (usec): min=2996, max=20338, avg=11027.99, stdev=1048.70 00:33:21.350 clat percentiles (usec): 00:33:21.350 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:33:21.350 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:33:21.350 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12256], 95.00th=[12649], 00:33:21.350 | 99.00th=[13566], 99.50th=[14222], 99.90th=[17433], 99.95th=[19268], 00:33:21.350 | 99.99th=[20317] 00:33:21.350 bw ( KiB/s): min=23304, max=24784, per=99.88%, avg=24248.00, stdev=693.22, samples=4 00:33:21.350 iops : min= 5826, max= 6196, avg=6062.00, stdev=173.31, samples=4 00:33:21.350 write: IOPS=6050, BW=23.6MiB/s (24.8MB/s)(47.5MiB/2009msec); 0 zone resets 00:33:21.350 slat (nsec): min=1697, max=217301, avg=3006.98, stdev=3036.98 00:33:21.350 clat (usec): min=2154, max=17711, avg=9988.41, stdev=1004.59 00:33:21.350 lat (usec): min=2166, max=17713, avg=9991.42, stdev=1005.17 00:33:21.350 clat percentiles (usec): 00:33:21.350 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:33:21.350 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:33:21.350 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11600], 00:33:21.350 | 99.00th=[12649], 99.50th=[13042], 99.90th=[15795], 99.95th=[17433], 00:33:21.350 | 99.99th=[17695] 00:33:21.350 bw ( KiB/s): min=23808, max=24392, per=99.97%, avg=24194.00, stdev=263.26, samples=4 00:33:21.350 iops : min= 5952, max= 6098, avg=6048.50, stdev=65.82, samples=4 00:33:21.350 lat (msec) : 4=0.05%, 10=31.99%, 20=67.94%, 50=0.02% 00:33:21.350 cpu : usr=78.49%, sys=16.53%, ctx=56, majf=0, minf=29 00:33:21.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:33:21.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:21.350 issued rwts: total=12193,12155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:21.350 00:33:21.350 Run status group 0 (all jobs): 00:33:21.350 READ: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=47.6MiB (49.9MB), run=2009-2009msec 00:33:21.350 WRITE: bw=23.6MiB/s (24.8MB/s), 23.6MiB/s-23.6MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2009-2009msec 00:33:21.350 08:29:54 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:21.350 08:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.350 08:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:21.350 08:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.350 08:29:54 -- host/fio.sh@72 -- # sync 00:33:21.350 08:29:54 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:21.350 08:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.350 08:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:21.350 08:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.350 08:29:54 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:33:21.350 08:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.351 08:29:54 -- 
common/autotest_common.sh@10 -- # set +x 00:33:21.351 08:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.351 08:29:54 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:33:21.351 08:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.351 08:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:21.351 08:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.351 08:29:54 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:33:21.351 08:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.351 08:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:21.351 08:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.351 08:29:54 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:33:21.351 08:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.351 08:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:23.272 08:29:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.272 08:29:56 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:33:23.272 08:29:56 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:33:23.272 08:29:56 -- host/fio.sh@84 -- # nvmftestfini 00:33:23.272 08:29:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:23.272 08:29:56 -- nvmf/common.sh@116 -- # sync 00:33:23.272 08:29:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:23.272 08:29:56 -- nvmf/common.sh@119 -- # set +e 00:33:23.272 08:29:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:23.272 08:29:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:23.272 rmmod nvme_tcp 00:33:23.272 rmmod nvme_fabrics 00:33:23.272 rmmod nvme_keyring 00:33:23.272 08:29:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:23.272 08:29:56 -- nvmf/common.sh@123 -- # set -e 00:33:23.272 08:29:56 -- nvmf/common.sh@124 -- # return 0 00:33:23.272 08:29:56 -- nvmf/common.sh@477 -- # '[' -n 69544 ']' 00:33:23.272 08:29:56 -- nvmf/common.sh@478 -- # killprocess 69544 00:33:23.272 08:29:56 -- common/autotest_common.sh@926 -- # '[' -z 69544 ']' 00:33:23.272 08:29:56 -- common/autotest_common.sh@930 -- # kill -0 69544 00:33:23.272 08:29:56 -- common/autotest_common.sh@931 -- # uname 00:33:23.272 08:29:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:23.272 08:29:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69544 00:33:23.272 killing process with pid 69544 00:33:23.272 08:29:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:23.272 08:29:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:23.272 08:29:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69544' 00:33:23.272 08:29:56 -- common/autotest_common.sh@945 -- # kill 69544 00:33:23.272 08:29:56 -- common/autotest_common.sh@950 -- # wait 69544 00:33:23.272 08:29:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:23.272 08:29:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:23.272 08:29:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:23.272 08:29:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:23.272 08:29:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:23.272 08:29:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.272 08:29:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:23.272 08:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.272 08:29:56 -- nvmf/common.sh@278 -- # ip -4 addr flush 
nvmf_init_if 00:33:23.272 00:33:23.272 real 0m14.496s 00:33:23.272 user 1m0.351s 00:33:23.272 sys 0m3.430s 00:33:23.272 08:29:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:23.272 08:29:56 -- common/autotest_common.sh@10 -- # set +x 00:33:23.272 ************************************ 00:33:23.272 END TEST nvmf_fio_host 00:33:23.272 ************************************ 00:33:23.531 08:29:56 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:23.531 08:29:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:23.531 08:29:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:23.531 08:29:56 -- common/autotest_common.sh@10 -- # set +x 00:33:23.531 ************************************ 00:33:23.531 START TEST nvmf_failover 00:33:23.531 ************************************ 00:33:23.531 08:29:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:23.531 * Looking for test storage... 00:33:23.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:23.531 08:29:56 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:23.531 08:29:56 -- nvmf/common.sh@7 -- # uname -s 00:33:23.531 08:29:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.531 08:29:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.531 08:29:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.531 08:29:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.531 08:29:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.531 08:29:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.531 08:29:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.531 08:29:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.531 08:29:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.531 08:29:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.531 08:29:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:33:23.532 08:29:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:33:23.532 08:29:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.532 08:29:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.532 08:29:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:23.532 08:29:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:23.532 08:29:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.532 08:29:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.532 08:29:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.532 08:29:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.532 08:29:56 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.532 08:29:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.532 08:29:56 -- paths/export.sh@5 -- # export PATH 00:33:23.532 08:29:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.532 08:29:56 -- nvmf/common.sh@46 -- # : 0 00:33:23.532 08:29:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:23.532 08:29:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:23.532 08:29:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:23.532 08:29:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.532 08:29:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.532 08:29:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:23.532 08:29:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:23.532 08:29:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:23.532 08:29:56 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:23.532 08:29:56 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:23.532 08:29:56 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:23.532 08:29:56 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:23.532 08:29:56 -- host/failover.sh@18 -- # nvmftestinit 00:33:23.532 08:29:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:23.532 08:29:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.532 08:29:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:23.532 08:29:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:23.532 08:29:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:23.532 08:29:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.532 08:29:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:23.532 08:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.532 08:29:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:23.532 08:29:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:23.532 08:29:56 -- nvmf/common.sh@411 
-- # [[ virt == phy ]] 00:33:23.532 08:29:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:23.532 08:29:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:23.532 08:29:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:23.532 08:29:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.532 08:29:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.532 08:29:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:23.532 08:29:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:23.532 08:29:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:23.532 08:29:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:23.532 08:29:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:23.532 08:29:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.532 08:29:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:23.532 08:29:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:23.532 08:29:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:23.532 08:29:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:23.532 08:29:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:23.532 08:29:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:23.532 Cannot find device "nvmf_tgt_br" 00:33:23.532 08:29:56 -- nvmf/common.sh@154 -- # true 00:33:23.532 08:29:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:23.532 Cannot find device "nvmf_tgt_br2" 00:33:23.532 08:29:56 -- nvmf/common.sh@155 -- # true 00:33:23.532 08:29:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:23.532 08:29:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:23.792 Cannot find device "nvmf_tgt_br" 00:33:23.792 08:29:56 -- nvmf/common.sh@157 -- # true 00:33:23.792 08:29:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:23.792 Cannot find device "nvmf_tgt_br2" 00:33:23.792 08:29:56 -- nvmf/common.sh@158 -- # true 00:33:23.792 08:29:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:23.792 08:29:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:23.792 08:29:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:23.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:23.792 08:29:56 -- nvmf/common.sh@161 -- # true 00:33:23.792 08:29:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:23.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:23.792 08:29:56 -- nvmf/common.sh@162 -- # true 00:33:23.792 08:29:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:23.792 08:29:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:23.792 08:29:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:23.792 08:29:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:23.792 08:29:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:23.792 08:29:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:23.792 08:29:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:23.792 08:29:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.2/24 dev nvmf_tgt_if 00:33:23.792 08:29:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:23.792 08:29:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:23.792 08:29:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:23.792 08:29:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:23.792 08:29:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:23.792 08:29:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:23.792 08:29:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:23.792 08:29:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:23.792 08:29:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:23.792 08:29:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:23.792 08:29:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:24.052 08:29:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:24.052 08:29:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:24.052 08:29:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:24.052 08:29:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:24.052 08:29:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:24.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:33:24.052 00:33:24.052 --- 10.0.0.2 ping statistics --- 00:33:24.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.052 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:33:24.052 08:29:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:24.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:24.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:33:24.052 00:33:24.052 --- 10.0.0.3 ping statistics --- 00:33:24.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.052 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:33:24.052 08:29:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:24.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:24.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:33:24.052 00:33:24.052 --- 10.0.0.1 ping statistics --- 00:33:24.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.052 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:33:24.052 08:29:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.052 08:29:57 -- nvmf/common.sh@421 -- # return 0 00:33:24.052 08:29:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:24.052 08:29:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.052 08:29:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:24.052 08:29:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:24.052 08:29:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.052 08:29:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:24.052 08:29:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:24.052 08:29:57 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:24.052 08:29:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:24.052 08:29:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:24.052 08:29:57 -- common/autotest_common.sh@10 -- # set +x 00:33:24.052 08:29:57 -- nvmf/common.sh@469 -- # nvmfpid=70015 00:33:24.052 08:29:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:24.052 08:29:57 -- nvmf/common.sh@470 -- # waitforlisten 70015 00:33:24.052 08:29:57 -- common/autotest_common.sh@819 -- # '[' -z 70015 ']' 00:33:24.052 08:29:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.052 08:29:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:24.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.052 08:29:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.052 08:29:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:24.052 08:29:57 -- common/autotest_common.sh@10 -- # set +x 00:33:24.052 [2024-04-17 08:29:57.268613] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:24.052 [2024-04-17 08:29:57.268691] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.312 [2024-04-17 08:29:57.409803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:24.312 [2024-04-17 08:29:57.560401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:24.312 [2024-04-17 08:29:57.560544] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.312 [2024-04-17 08:29:57.560550] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.312 [2024-04-17 08:29:57.560556] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
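The nvmf_veth_init trace above builds the isolated topology the rest of the test runs on: the target lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 (plus a second interface at 10.0.0.3), the initiator keeps 10.0.0.1 on nvmf_init_if, the peer ends of the veth pairs are enslaved to the nvmf_br bridge, and iptables opens TCP port 4420 toward the initiator. A minimal standalone sketch of the same layout, run as root and using the interface/namespace names from the trace (the second target interface and the cleanup steps are left out; this is an illustration, not the suite's nvmf_veth_init helper):

#!/usr/bin/env bash
set -e
# Namespace for the NVMe-oF target and two veth pairs: one for the initiator, one for the target.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1 on the host side, target 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side veth ends together and let NVMe/TCP traffic through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Both directions should answer, matching the ping output in the log.
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1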
00:33:24.312 [2024-04-17 08:29:57.560705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:24.312 [2024-04-17 08:29:57.560999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.312 [2024-04-17 08:29:57.561005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:24.879 08:29:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:24.879 08:29:58 -- common/autotest_common.sh@852 -- # return 0 00:33:24.879 08:29:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:24.879 08:29:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:24.879 08:29:58 -- common/autotest_common.sh@10 -- # set +x 00:33:24.879 08:29:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.879 08:29:58 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:25.138 [2024-04-17 08:29:58.317446] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.138 08:29:58 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:25.397 Malloc0 00:33:25.397 08:29:58 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:25.655 08:29:58 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:25.655 08:29:58 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.913 [2024-04-17 08:29:59.136815] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.913 08:29:59 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:26.171 [2024-04-17 08:29:59.328615] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:26.171 08:29:59 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:26.429 [2024-04-17 08:29:59.524665] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:26.429 08:29:59 -- host/failover.sh@31 -- # bdevperf_pid=70069 00:33:26.429 08:29:59 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:26.429 08:29:59 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:26.429 08:29:59 -- host/failover.sh@34 -- # waitforlisten 70069 /var/tmp/bdevperf.sock 00:33:26.429 08:29:59 -- common/autotest_common.sh@819 -- # '[' -z 70069 ']' 00:33:26.429 08:29:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:26.429 08:29:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:26.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:26.429 08:29:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
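Everything the target needs for this test is configured through the rpc.py calls traced above: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and listeners on ports 4420, 4421 and 4422 of 10.0.0.2 so the failover test has three paths to move between. Condensed into one sketch, with the rpc.py path taken from the trace and the options copied verbatim from the traced calls (an illustration of the sequence, not host/failover.sh itself):

#!/usr/bin/env bash
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Three listeners on the target address -> three candidate paths for the failover exercise.
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done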
00:33:26.429 08:29:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:26.429 08:29:59 -- common/autotest_common.sh@10 -- # set +x 00:33:27.364 08:30:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:27.364 08:30:00 -- common/autotest_common.sh@852 -- # return 0 00:33:27.364 08:30:00 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:27.622 NVMe0n1 00:33:27.622 08:30:00 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:27.879 00:33:27.879 08:30:01 -- host/failover.sh@39 -- # run_test_pid=70093 00:33:27.879 08:30:01 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:27.879 08:30:01 -- host/failover.sh@41 -- # sleep 1 00:33:28.810 08:30:02 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.068 [2024-04-17 08:30:02.233632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 
08:30:02.233760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 [2024-04-17 08:30:02.233843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfd4a0 is same with the state(5) to be set 00:33:29.068 08:30:02 -- host/failover.sh@45 -- # sleep 3 00:33:32.401 08:30:05 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:32.401 00:33:32.401 08:30:05 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:32.660 [2024-04-17 08:30:05.769922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.769970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.769978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) 
to be set 00:33:32.660 [2024-04-17 08:30:05.769985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.769990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.769996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 [2024-04-17 08:30:05.770058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfdb60 is same with the state(5) to be set 00:33:32.660 08:30:05 -- host/failover.sh@50 -- # sleep 3 00:33:35.947 08:30:08 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:35.947 [2024-04-17 08:30:09.006943] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.947 08:30:09 -- host/failover.sh@55 -- # sleep 1 00:33:36.885 08:30:10 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:37.145 [2024-04-17 08:30:10.232084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the 
state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.145 [2024-04-17 08:30:10.232205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 
08:30:10.232441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 [2024-04-17 08:30:10.232457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc190 is same with the state(5) to be set 00:33:37.146 08:30:10 -- host/failover.sh@59 -- # wait 70093 00:33:43.749 0 00:33:43.749 08:30:16 -- host/failover.sh@61 -- # killprocess 70069 00:33:43.749 08:30:16 -- common/autotest_common.sh@926 -- # '[' -z 70069 ']' 00:33:43.749 08:30:16 -- common/autotest_common.sh@930 -- # kill -0 70069 00:33:43.749 08:30:16 -- common/autotest_common.sh@931 -- # uname 00:33:43.749 08:30:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:43.749 08:30:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70069 00:33:43.749 08:30:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:43.749 08:30:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:43.749 08:30:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70069' 00:33:43.749 killing process with pid 70069 00:33:43.749 08:30:16 -- common/autotest_common.sh@945 -- # kill 70069 00:33:43.749 08:30:16 -- common/autotest_common.sh@950 -- # wait 70069 00:33:43.749 08:30:16 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:43.749 [2024-04-17 08:29:59.596008] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:43.749 [2024-04-17 08:29:59.596126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70069 ] 00:33:43.749 [2024-04-17 08:29:59.735897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.749 [2024-04-17 08:29:59.841020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.749 Running I/O for 15 seconds... 
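The try.txt dump that follows is bdevperf's view of the same run: NVMe0 is attached to cnode1 over two paths (ports 4420 and 4421), a 15-second verify workload is started, and each time a listener is removed the in-flight commands on that path complete with ABORTED - SQ DELETION while I/O fails over to a surviving path; the 0 printed by wait 70093 above is the workload finishing cleanly. Reduced to the commands visible in the trace, with the bdevperf RPC socket, NQN and address as in the log (a sketch of the sequence, not the failover.sh script itself):

#!/usr/bin/env bash
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bp_rpc="$rpc -s /var/tmp/bdevperf.sock"    # bdevperf was started with -z -r /var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Two paths to the same subsystem -> one NVMe0n1 bdev with path failover between them.
$bp_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
$bp_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn

# Kick off the 15 s verify run, then pull listeners out from under it one at a time.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420    # fail over to 4421
sleep 3
$bp_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421    # fail over to 4422
sleep 3
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420       # bring the first port back
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422    # back onto 4420
wait                                                                    # workload exits 0 once the 15 s elapse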
00:33:43.750 [2024-04-17 08:30:02.233909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.233960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.233982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.233993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 
08:30:02.234174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.750 [2024-04-17 08:30:02.234453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234654] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.750 [2024-04-17 08:30:02.234686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.750 [2024-04-17 08:30:02.234731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.750 [2024-04-17 08:30:02.234754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.750 [2024-04-17 08:30:02.234806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.750 [2024-04-17 08:30:02.234819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.234835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.234850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.234865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.234880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.234892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.234903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.234914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.234926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.234936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.234948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.234976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.234988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 
[2024-04-17 08:30:02.235425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.751 [2024-04-17 08:30:02.235591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235670] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.751 [2024-04-17 08:30:02.235723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.751 [2024-04-17 08:30:02.235733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.235761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.235782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.235803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.235825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.235846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.235867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.235890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.235911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.235932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.235954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.235974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.235985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.235996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236108] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.752 [2024-04-17 08:30:02.236531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.752 [2024-04-17 08:30:02.236542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.752 [2024-04-17 08:30:02.236552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.753 [2024-04-17 08:30:02.236579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.753 [2024-04-17 08:30:02.236624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.753 [2024-04-17 08:30:02.236647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.753 [2024-04-17 08:30:02.236687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.753 [2024-04-17 08:30:02.236710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.753 [2024-04-17 08:30:02.236751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 
08:30:02.236784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:02.236908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.236919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e0790 is same with the state(5) to be set 00:33:43.753 [2024-04-17 08:30:02.236933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:43.753 [2024-04-17 08:30:02.236941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:43.753 [2024-04-17 08:30:02.236952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100656 len:8 PRP1 0x0 PRP2 0x0 00:33:43.753 [2024-04-17 08:30:02.236962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.237013] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10e0790 was disconnected and freed. reset controller. 
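Every completion in the abort burst above carries the same status pair, "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. The submission queue was torn down while these READ/WRITE commands were still outstanding, so each one is completed with this status instead of data. The short Python sketch below decodes only the codes that actually appear in this log; it is an illustrative helper, not SPDK's own completion printer.

# Illustrative decoder for the "(sct/sc)" pair that SPDK prints in completions
# such as "ABORTED - SQ DELETION (00/08)". Only the codes seen in this log are
# mapped; this is not a full NVMe status decoder.
GENERIC_STATUS = {          # status code type 0x0: generic command status
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(pair: str) -> str:
    sct, sc = (int(field, 16) for field in pair.strip("()").split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x}, sc 0x{sc:02x}"

print(decode_status("(00/08)"))   # prints: ABORTED - SQ DELETION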
00:33:43.753 [2024-04-17 08:30:02.237027] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:43.753 [2024-04-17 08:30:02.237081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.753 [2024-04-17 08:30:02.237094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.237105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.753 [2024-04-17 08:30:02.237115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.237125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.753 [2024-04-17 08:30:02.237135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.237146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.753 [2024-04-17 08:30:02.237155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:02.237165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:43.753 [2024-04-17 08:30:02.237206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106b160 (9): Bad file descriptor 00:33:43.753 [2024-04-17 08:30:02.239510] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:43.753 [2024-04-17 08:30:02.256361] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
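The entries above complete the first failover cycle of this run: the TCP connection to 10.0.0.2:4420 drops, I/O qpair 0x10e0790 is disconnected and freed with its queued commands aborted, the admin queue's ASYNC EVENT REQUESTs are aborted as well, bdev_nvme starts a failover to the next path at 10.0.0.2:4421, and the controller reset completes successfully. Logs like this are easier to follow when reduced to just those transitions; the sketch below does so with plain regular expressions over the console text. It assumes only the message formats visible in this output, and the file name failover.log is a placeholder rather than a file produced by the job.

#!/usr/bin/env python3
# Condense an SPDK bdev_nvme failover log to its key events: how many queued
# commands were aborted, which path-to-path failovers were started, and how
# many controller resets completed. Patterns match the messages shown above.
import re
import sys

ABORTED  = re.compile(r"ABORTED - SQ DELETION")
FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK = re.compile(r"Resetting controller successful")

def summarize(text: str) -> None:
    print(f"aborted completions:          {len(ABORTED.findall(text))}")
    for src, dst in FAILOVER.findall(text):
        print(f"failover started:             {src} -> {dst}")
    print(f"successful controller resets: {len(RESET_OK.findall(text))}")

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "failover.log"
    with open(path) as f:
        summarize(f.read())

Applied to the text of this section it would report the 10.0.0.2:4420 to 10.0.0.2:4421 transition above and the 10.0.0.2:4421 to 10.0.0.2:4422 transition that follows further down.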
00:33:43.753 [2024-04-17 08:30:05.769978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.753 [2024-04-17 08:30:05.770027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.753 [2024-04-17 08:30:05.770070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.753 [2024-04-17 08:30:05.770089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.753 [2024-04-17 08:30:05.770108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106b160 is same with the state(5) to be set 00:33:43.753 [2024-04-17 08:30:05.770167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.753 [2024-04-17 08:30:05.770379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.753 [2024-04-17 08:30:05.770396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.754 [2024-04-17 08:30:05.770618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.754 [2024-04-17 08:30:05.770843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.754 [2024-04-17 08:30:05.770864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.754 [2024-04-17 08:30:05.770908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.770951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.754 [2024-04-17 08:30:05.770978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.770990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.754 [2024-04-17 08:30:05.771000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 
[2024-04-17 08:30:05.771012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.754 [2024-04-17 08:30:05.771022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.771034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.754 [2024-04-17 08:30:05.771045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.754 [2024-04-17 08:30:05.771057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.755 [2024-04-17 08:30:05.771156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.755 [2024-04-17 08:30:05.771247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771476] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.755 [2024-04-17 08:30:05.771608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.755 [2024-04-17 08:30:05.771677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.755 [2024-04-17 08:30:05.771754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.755 [2024-04-17 08:30:05.771965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.755 [2024-04-17 08:30:05.771976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.771988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.771998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.756 [2024-04-17 08:30:05.772020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.756 [2024-04-17 
08:30:05.772177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.756 [2024-04-17 08:30:05.772266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.756 [2024-04-17 08:30:05.772338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.756 [2024-04-17 08:30:05.772400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.756 [2024-04-17 08:30:05.772678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.756 [2024-04-17 08:30:05.772723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.756 [2024-04-17 08:30:05.772744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.756 [2024-04-17 08:30:05.772755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.756 [2024-04-17 08:30:05.772764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.772786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.757 [2024-04-17 08:30:05.772807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.772828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.772849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.772871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.757 [2024-04-17 08:30:05.772892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.772913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.772934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.772956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.772981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.772992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.773002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.773013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.773022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.773033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:05.773042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 
[2024-04-17 08:30:05.773053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106e8c0 is same with the state(5) to be set 00:33:43.757 [2024-04-17 08:30:05.773065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:43.757 [2024-04-17 08:30:05.773072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:43.757 [2024-04-17 08:30:05.773080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93144 len:8 PRP1 0x0 PRP2 0x0 00:33:43.757 [2024-04-17 08:30:05.773091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:05.773136] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x106e8c0 was disconnected and freed. reset controller. 00:33:43.757 [2024-04-17 08:30:05.773149] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:43.757 [2024-04-17 08:30:05.773160] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:43.757 [2024-04-17 08:30:05.775262] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:43.757 [2024-04-17 08:30:05.775300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106b160 (9): Bad file descriptor 00:33:43.757 [2024-04-17 08:30:05.795396] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:43.757 [2024-04-17 08:30:10.232514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:43.757 [2024-04-17 08:30:10.232706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 
08:30:10.232922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.232988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.232999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.233010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.233020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.233032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.757 [2024-04-17 08:30:10.233042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.233054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.757 [2024-04-17 08:30:10.233064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.757 [2024-04-17 08:30:10.233076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.758 [2024-04-17 08:30:10.233129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.758 [2024-04-17 08:30:10.233434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.758 [2024-04-17 08:30:10.233456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.758 [2024-04-17 08:30:10.233478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.758 [2024-04-17 08:30:10.233729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.758 [2024-04-17 08:30:10.233810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.758 [2024-04-17 08:30:10.233855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 
[2024-04-17 08:30:10.233886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.758 [2024-04-17 08:30:10.233936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.758 [2024-04-17 08:30:10.233976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.758 [2024-04-17 08:30:10.233987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.758 [2024-04-17 08:30:10.233996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70064 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.759 [2024-04-17 08:30:10.234637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.759 [2024-04-17 08:30:10.234729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.759 [2024-04-17 08:30:10.234741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 
[2024-04-17 08:30:10.234774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.234986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.234996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.235040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.235106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.235174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.235218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.235247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.760 [2024-04-17 08:30:10.235364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.235386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.235408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.235430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-04-17 08:30:10.235452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127e2f0 is same with the state(5) to be set 00:33:43.760 [2024-04-17 08:30:10.235475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:43.760 [2024-04-17 08:30:10.235483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:43.760 [2024-04-17 08:30:10.235493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69712 len:8 PRP1 0x0 PRP2 0x0 00:33:43.760 [2024-04-17 08:30:10.235503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235554] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x127e2f0 was disconnected and freed. reset controller. 00:33:43.760 [2024-04-17 08:30:10.235567] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:43.760 [2024-04-17 08:30:10.235624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.760 [2024-04-17 08:30:10.235637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.760 [2024-04-17 08:30:10.235658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.760 [2024-04-17 08:30:10.235684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.760 [2024-04-17 08:30:10.235708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.760 [2024-04-17 08:30:10.235719] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:43.760 [2024-04-17 08:30:10.238001] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:43.761 [2024-04-17 08:30:10.238036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106b160 (9): Bad file descriptor 00:33:43.761 [2024-04-17 08:30:10.255079] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
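The dump above records the verify run's path failovers: when the active connection is dropped, the I/O still queued on the disconnected qpair is completed as ABORTED - SQ DELETION, bdev_nvme starts a failover to the next registered address (here 4421 -> 4422 and then 4422 -> 4420), and each reset ends with "Resetting controller successful". A minimal, hedged way to summarize that chain from a captured run log (the file name is an assumption, borrowed from the try.txt capture this script reads later):

  # Sketch only: $log is an assumed capture of the bdevperf output for this run.
  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' "$log"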
00:33:43.761 00:33:43.761 Latency(us) 00:33:43.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.761 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:43.761 Verification LBA range: start 0x0 length 0x4000 00:33:43.761 NVMe0n1 : 15.01 13564.25 52.99 201.83 0.00 9281.75 384.56 22322.31 00:33:43.761 =================================================================================================================== 00:33:43.761 Total : 13564.25 52.99 201.83 0.00 9281.75 384.56 22322.31 00:33:43.761 Received shutdown signal, test time was about 15.000000 seconds 00:33:43.761 00:33:43.761 Latency(us) 00:33:43.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.761 =================================================================================================================== 00:33:43.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:43.761 08:30:16 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:43.761 08:30:16 -- host/failover.sh@65 -- # count=3 00:33:43.761 08:30:16 -- host/failover.sh@67 -- # (( count != 3 )) 00:33:43.761 08:30:16 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:43.761 08:30:16 -- host/failover.sh@73 -- # bdevperf_pid=70270 00:33:43.761 08:30:16 -- host/failover.sh@75 -- # waitforlisten 70270 /var/tmp/bdevperf.sock 00:33:43.761 08:30:16 -- common/autotest_common.sh@819 -- # '[' -z 70270 ']' 00:33:43.761 08:30:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:43.761 08:30:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:43.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:43.761 08:30:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
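The trace above is the pass/fail gate for the run just summarized: host/failover.sh counts the "Resetting controller successful" lines and requires exactly three, one per failover, before it starts a second, one-second bdevperf instance (-t 1) driven over the RPC socket /var/tmp/bdevperf.sock. A minimal sketch of that check, assuming the run's output was captured to a file:

  # Sketch of the check traced at host/failover.sh@65-67; $out is an assumed capture file.
  out=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$out")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi

The summary table itself is internally consistent: at the 4096-byte IO size, 13564.25 IOPS is 13564.25 * 4096 / 1048576, i.e. about 52.99 MiB/s, which matches the MiB/s column.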
00:33:43.761 08:30:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:43.761 08:30:16 -- common/autotest_common.sh@10 -- # set +x 00:33:44.329 08:30:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:44.329 08:30:17 -- common/autotest_common.sh@852 -- # return 0 00:33:44.329 08:30:17 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:44.329 [2024-04-17 08:30:17.531270] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:44.329 08:30:17 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:44.588 [2024-04-17 08:30:17.743106] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:44.588 08:30:17 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:44.847 NVMe0n1 00:33:44.847 08:30:18 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:45.105 00:33:45.105 08:30:18 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:45.365 00:33:45.365 08:30:18 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:45.365 08:30:18 -- host/failover.sh@82 -- # grep -q NVMe0 00:33:45.624 08:30:18 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:45.883 08:30:19 -- host/failover.sh@87 -- # sleep 3 00:33:49.185 08:30:22 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:49.185 08:30:22 -- host/failover.sh@88 -- # grep -q NVMe0 00:33:49.185 08:30:22 -- host/failover.sh@90 -- # run_test_pid=70347 00:33:49.185 08:30:22 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:49.185 08:30:22 -- host/failover.sh@92 -- # wait 70347 00:33:50.118 0 00:33:50.118 08:30:23 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:50.118 [2024-04-17 08:30:16.442734] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
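The commands above rebuild the multipath topology for the next run: listeners are added on ports 4421 and 4422, the same subsystem (nqn.2016-06.io.spdk:cnode1) is attached three times under the single bdev name NVMe0, one controller path per port, and the active 4420 path is then detached so that I/O has to fail over. Condensed into a sketch (addresses, ports and names are simply the ones used in this run):

  # Sketch of the sequence traced at host/failover.sh@76-84 for this run.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for port in 4421 4422; do
      "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  for port in 4420 4421 4422; do
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # Detaching the active path forces bdev_nvme to fail over to the next registered one.
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1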
00:33:50.118 [2024-04-17 08:30:16.442900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70270 ] 00:33:50.118 [2024-04-17 08:30:16.588985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.118 [2024-04-17 08:30:16.686754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.118 [2024-04-17 08:30:19.030893] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:50.118 [2024-04-17 08:30:19.031497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.118 [2024-04-17 08:30:19.031581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.118 [2024-04-17 08:30:19.031635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.118 [2024-04-17 08:30:19.031681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.118 [2024-04-17 08:30:19.031728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.118 [2024-04-17 08:30:19.031773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.118 [2024-04-17 08:30:19.031813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.118 [2024-04-17 08:30:19.031862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.118 [2024-04-17 08:30:19.031903] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:50.118 [2024-04-17 08:30:19.031997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:50.118 [2024-04-17 08:30:19.032063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ad160 (9): Bad file descriptor 00:33:50.118 [2024-04-17 08:30:19.036559] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:50.118 Running I/O for 1 seconds... 
00:33:50.118 00:33:50.118 Latency(us) 00:33:50.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.118 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:50.118 Verification LBA range: start 0x0 length 0x4000 00:33:50.118 NVMe0n1 : 1.01 13819.18 53.98 0.00 0.00 9217.00 922.94 14881.54 00:33:50.118 =================================================================================================================== 00:33:50.118 Total : 13819.18 53.98 0.00 0.00 9217.00 922.94 14881.54 00:33:50.118 08:30:23 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:50.118 08:30:23 -- host/failover.sh@95 -- # grep -q NVMe0 00:33:50.377 08:30:23 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:50.665 08:30:23 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:50.665 08:30:23 -- host/failover.sh@99 -- # grep -q NVMe0 00:33:50.950 08:30:24 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:51.209 08:30:24 -- host/failover.sh@101 -- # sleep 3 00:33:54.521 08:30:27 -- host/failover.sh@103 -- # grep -q NVMe0 00:33:54.521 08:30:27 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:54.521 08:30:27 -- host/failover.sh@108 -- # killprocess 70270 00:33:54.521 08:30:27 -- common/autotest_common.sh@926 -- # '[' -z 70270 ']' 00:33:54.521 08:30:27 -- common/autotest_common.sh@930 -- # kill -0 70270 00:33:54.521 08:30:27 -- common/autotest_common.sh@931 -- # uname 00:33:54.521 08:30:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:54.521 08:30:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70270 00:33:54.521 08:30:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:54.521 08:30:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:54.521 08:30:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70270' 00:33:54.521 killing process with pid 70270 00:33:54.521 08:30:27 -- common/autotest_common.sh@945 -- # kill 70270 00:33:54.521 08:30:27 -- common/autotest_common.sh@950 -- # wait 70270 00:33:54.521 08:30:27 -- host/failover.sh@110 -- # sync 00:33:54.521 08:30:27 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:54.780 08:30:28 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:54.781 08:30:28 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:54.781 08:30:28 -- host/failover.sh@116 -- # nvmftestfini 00:33:54.781 08:30:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:54.781 08:30:28 -- nvmf/common.sh@116 -- # sync 00:33:54.781 08:30:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:54.781 08:30:28 -- nvmf/common.sh@119 -- # set +e 00:33:54.781 08:30:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:54.781 08:30:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:54.781 rmmod nvme_tcp 00:33:54.781 rmmod nvme_fabrics 00:33:54.781 rmmod nvme_keyring 00:33:55.040 08:30:28 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-fabrics 00:33:55.040 08:30:28 -- nvmf/common.sh@123 -- # set -e 00:33:55.040 08:30:28 -- nvmf/common.sh@124 -- # return 0 00:33:55.040 08:30:28 -- nvmf/common.sh@477 -- # '[' -n 70015 ']' 00:33:55.040 08:30:28 -- nvmf/common.sh@478 -- # killprocess 70015 00:33:55.040 08:30:28 -- common/autotest_common.sh@926 -- # '[' -z 70015 ']' 00:33:55.040 08:30:28 -- common/autotest_common.sh@930 -- # kill -0 70015 00:33:55.040 08:30:28 -- common/autotest_common.sh@931 -- # uname 00:33:55.040 08:30:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:55.040 08:30:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70015 00:33:55.041 killing process with pid 70015 00:33:55.041 08:30:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:55.041 08:30:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:55.041 08:30:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70015' 00:33:55.041 08:30:28 -- common/autotest_common.sh@945 -- # kill 70015 00:33:55.041 08:30:28 -- common/autotest_common.sh@950 -- # wait 70015 00:33:55.301 08:30:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:55.301 08:30:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:55.301 08:30:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:55.301 08:30:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:55.301 08:30:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:55.301 08:30:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.301 08:30:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:55.301 08:30:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.301 08:30:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:55.301 00:33:55.301 real 0m31.815s 00:33:55.301 user 2m2.800s 00:33:55.301 sys 0m4.835s 00:33:55.301 08:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:55.301 08:30:28 -- common/autotest_common.sh@10 -- # set +x 00:33:55.301 ************************************ 00:33:55.301 END TEST nvmf_failover 00:33:55.301 ************************************ 00:33:55.301 08:30:28 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:55.301 08:30:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:55.301 08:30:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:55.301 08:30:28 -- common/autotest_common.sh@10 -- # set +x 00:33:55.301 ************************************ 00:33:55.301 START TEST nvmf_discovery 00:33:55.301 ************************************ 00:33:55.301 08:30:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:55.301 * Looking for test storage... 
00:33:55.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:55.301 08:30:28 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:55.301 08:30:28 -- nvmf/common.sh@7 -- # uname -s 00:33:55.301 08:30:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:55.301 08:30:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:55.301 08:30:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:55.301 08:30:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:55.301 08:30:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:55.301 08:30:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:55.301 08:30:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:55.301 08:30:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:55.301 08:30:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:55.301 08:30:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:55.301 08:30:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:33:55.301 08:30:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:33:55.301 08:30:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:55.301 08:30:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:55.301 08:30:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:55.301 08:30:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:55.301 08:30:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:55.301 08:30:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:55.301 08:30:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:55.301 08:30:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.301 08:30:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.301 08:30:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.301 08:30:28 -- paths/export.sh@5 
-- # export PATH 00:33:55.301 08:30:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.301 08:30:28 -- nvmf/common.sh@46 -- # : 0 00:33:55.301 08:30:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:55.301 08:30:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:55.301 08:30:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:55.301 08:30:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:55.301 08:30:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:55.301 08:30:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:55.301 08:30:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:55.301 08:30:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:55.301 08:30:28 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:55.301 08:30:28 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:55.301 08:30:28 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:55.301 08:30:28 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:55.301 08:30:28 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:55.301 08:30:28 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:55.301 08:30:28 -- host/discovery.sh@25 -- # nvmftestinit 00:33:55.301 08:30:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:55.301 08:30:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:55.301 08:30:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:55.301 08:30:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:55.301 08:30:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:55.301 08:30:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.301 08:30:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:55.301 08:30:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.561 08:30:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:33:55.561 08:30:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:55.561 08:30:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:55.562 08:30:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:55.562 08:30:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:55.562 08:30:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:55.562 08:30:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:55.562 08:30:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:55.562 08:30:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:55.562 08:30:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:55.562 08:30:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:55.562 08:30:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:55.562 08:30:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:55.562 08:30:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:55.562 08:30:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:55.562 
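The nvmf_veth_init sequence around this point builds the virtual test network: a network namespace for the target, veth pairs bridged back to the host, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, plus an iptables rule for the NVMe/TCP port. Condensed into a standalone sketch using the same iproute2 commands and names that appear in the trace (run as root; the cleanup of any stale interfaces and the second target interface are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target sanity check

The ping round trips recorded below are exactly this sanity check: once 10.0.0.2 and 10.0.0.3 answer from the host and 10.0.0.1 answers from inside the namespace, the topology is considered ready.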
08:30:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:55.562 08:30:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:55.562 08:30:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:55.562 08:30:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:55.562 08:30:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:55.562 Cannot find device "nvmf_tgt_br" 00:33:55.562 08:30:28 -- nvmf/common.sh@154 -- # true 00:33:55.562 08:30:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:55.562 Cannot find device "nvmf_tgt_br2" 00:33:55.562 08:30:28 -- nvmf/common.sh@155 -- # true 00:33:55.562 08:30:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:55.562 08:30:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:55.562 Cannot find device "nvmf_tgt_br" 00:33:55.562 08:30:28 -- nvmf/common.sh@157 -- # true 00:33:55.562 08:30:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:55.562 Cannot find device "nvmf_tgt_br2" 00:33:55.562 08:30:28 -- nvmf/common.sh@158 -- # true 00:33:55.562 08:30:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:55.562 08:30:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:55.562 08:30:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:55.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:55.562 08:30:28 -- nvmf/common.sh@161 -- # true 00:33:55.562 08:30:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:55.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:55.562 08:30:28 -- nvmf/common.sh@162 -- # true 00:33:55.562 08:30:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:55.562 08:30:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:55.562 08:30:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:55.562 08:30:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:55.562 08:30:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:55.562 08:30:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:55.562 08:30:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:55.562 08:30:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:55.562 08:30:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:55.562 08:30:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:55.562 08:30:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:55.562 08:30:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:55.822 08:30:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:55.822 08:30:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:55.822 08:30:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:55.822 08:30:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:55.822 08:30:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:55.822 08:30:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:55.822 08:30:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:33:55.822 08:30:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:55.822 08:30:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:55.822 08:30:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:55.822 08:30:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:55.822 08:30:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:55.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:55.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:33:55.822 00:33:55.822 --- 10.0.0.2 ping statistics --- 00:33:55.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.822 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:33:55.822 08:30:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:55.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:55.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:33:55.822 00:33:55.822 --- 10.0.0.3 ping statistics --- 00:33:55.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.822 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:33:55.822 08:30:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:55.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:55.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:33:55.822 00:33:55.822 --- 10.0.0.1 ping statistics --- 00:33:55.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.822 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:33:55.822 08:30:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:55.822 08:30:28 -- nvmf/common.sh@421 -- # return 0 00:33:55.822 08:30:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:55.822 08:30:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:55.822 08:30:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:55.822 08:30:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:55.822 08:30:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:55.822 08:30:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:55.822 08:30:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:55.822 08:30:28 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:55.822 08:30:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:55.822 08:30:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:55.822 08:30:28 -- common/autotest_common.sh@10 -- # set +x 00:33:55.822 08:30:29 -- nvmf/common.sh@469 -- # nvmfpid=70615 00:33:55.822 08:30:29 -- nvmf/common.sh@470 -- # waitforlisten 70615 00:33:55.822 08:30:29 -- common/autotest_common.sh@819 -- # '[' -z 70615 ']' 00:33:55.822 08:30:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:55.822 08:30:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.822 08:30:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:55.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.822 08:30:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
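Once the namespace answers pings, the discovery test starts two SPDK applications: nvmf_tgt inside the namespace acting as the target (with the well-known discovery service on 10.0.0.2:8009), and a second instance on the host side, driven over /tmp/host.sock, which attaches to that discovery service. A rough sketch of the setup that the following trace performs, assuming a built SPDK tree at /home/vagrant/spdk_repo/spdk (the waitforlisten/trap plumbing of the real script is left out):

    cd /home/vagrant/spdk_repo/spdk

    # target: runs inside the namespace, TCP transport plus the discovery listener on 8009
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512   # null bdevs later exported as namespaces of cnode0
    ./scripts/rpc.py bdev_null_create null1 1000 512

    # host: a second SPDK app on its own RPC socket that follows the discovery log page
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test

The rest of the trace then repeatedly compares the controller and bdev lists on the host socket against what the target advertises, checking that adding and removing listeners on cnode0 shows up as new or removed paths on the discovery side.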
00:33:55.822 08:30:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:55.822 08:30:29 -- common/autotest_common.sh@10 -- # set +x 00:33:55.822 [2024-04-17 08:30:29.050880] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:55.822 [2024-04-17 08:30:29.050954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.081 [2024-04-17 08:30:29.189440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.081 [2024-04-17 08:30:29.293686] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:56.081 [2024-04-17 08:30:29.293823] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:56.081 [2024-04-17 08:30:29.293832] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:56.081 [2024-04-17 08:30:29.293838] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:56.081 [2024-04-17 08:30:29.293864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.016 08:30:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:57.016 08:30:30 -- common/autotest_common.sh@852 -- # return 0 00:33:57.016 08:30:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:57.016 08:30:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:57.016 08:30:30 -- common/autotest_common.sh@10 -- # set +x 00:33:57.016 08:30:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.016 08:30:30 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:57.016 08:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.016 08:30:30 -- common/autotest_common.sh@10 -- # set +x 00:33:57.016 [2024-04-17 08:30:30.044409] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:57.016 08:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.016 08:30:30 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:57.016 08:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.016 08:30:30 -- common/autotest_common.sh@10 -- # set +x 00:33:57.016 [2024-04-17 08:30:30.056539] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:57.016 08:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.016 08:30:30 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:57.016 08:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.016 08:30:30 -- common/autotest_common.sh@10 -- # set +x 00:33:57.016 null0 00:33:57.016 08:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.016 08:30:30 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:57.016 08:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.016 08:30:30 -- common/autotest_common.sh@10 -- # set +x 00:33:57.016 null1 00:33:57.016 08:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.016 08:30:30 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:57.016 08:30:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.016 08:30:30 -- 
common/autotest_common.sh@10 -- # set +x 00:33:57.016 08:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.016 08:30:30 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:57.016 08:30:30 -- host/discovery.sh@45 -- # hostpid=70647 00:33:57.016 08:30:30 -- host/discovery.sh@46 -- # waitforlisten 70647 /tmp/host.sock 00:33:57.016 08:30:30 -- common/autotest_common.sh@819 -- # '[' -z 70647 ']' 00:33:57.016 08:30:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:33:57.016 08:30:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:57.016 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:57.016 08:30:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:57.016 08:30:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:57.016 08:30:30 -- common/autotest_common.sh@10 -- # set +x 00:33:57.016 [2024-04-17 08:30:30.148937] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:57.016 [2024-04-17 08:30:30.149042] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70647 ] 00:33:57.016 [2024-04-17 08:30:30.278631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.275 [2024-04-17 08:30:30.407931] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:57.275 [2024-04-17 08:30:30.408110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.842 08:30:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:57.842 08:30:31 -- common/autotest_common.sh@852 -- # return 0 00:33:57.842 08:30:31 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:57.842 08:30:31 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:57.842 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.842 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:57.842 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.842 08:30:31 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:57.842 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.842 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:57.842 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.842 08:30:31 -- host/discovery.sh@72 -- # notify_id=0 00:33:57.842 08:30:31 -- host/discovery.sh@78 -- # get_subsystem_names 00:33:57.842 08:30:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:57.842 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.842 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:57.842 08:30:31 -- host/discovery.sh@59 -- # sort 00:33:57.842 08:30:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:57.842 08:30:31 -- host/discovery.sh@59 -- # xargs 00:33:57.842 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.842 08:30:31 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:33:57.842 08:30:31 -- host/discovery.sh@79 -- # get_bdev_list 00:33:57.842 
08:30:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.842 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.842 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:57.842 08:30:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:57.842 08:30:31 -- host/discovery.sh@55 -- # sort 00:33:57.842 08:30:31 -- host/discovery.sh@55 -- # xargs 00:33:57.842 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.842 08:30:31 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:33:57.842 08:30:31 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:57.842 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.842 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.101 08:30:31 -- host/discovery.sh@82 -- # get_subsystem_names 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # sort 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # xargs 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:58.101 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.101 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.101 08:30:31 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:33:58.101 08:30:31 -- host/discovery.sh@83 -- # get_bdev_list 00:33:58.101 08:30:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.101 08:30:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.101 08:30:31 -- host/discovery.sh@55 -- # xargs 00:33:58.101 08:30:31 -- host/discovery.sh@55 -- # sort 00:33:58.101 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.101 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.101 08:30:31 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:58.101 08:30:31 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:58.101 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.101 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.101 08:30:31 -- host/discovery.sh@86 -- # get_subsystem_names 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # sort 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:58.101 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.101 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # xargs 00:33:58.101 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.101 08:30:31 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:33:58.101 08:30:31 -- host/discovery.sh@87 -- # get_bdev_list 00:33:58.101 08:30:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.101 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.101 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 08:30:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.101 08:30:31 -- host/discovery.sh@55 -- # sort 00:33:58.101 08:30:31 -- host/discovery.sh@55 -- # 
xargs 00:33:58.101 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.101 08:30:31 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:58.101 08:30:31 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:58.101 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.101 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.101 [2024-04-17 08:30:31.419158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.101 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.101 08:30:31 -- host/discovery.sh@92 -- # get_subsystem_names 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # sort 00:33:58.101 08:30:31 -- host/discovery.sh@59 -- # xargs 00:33:58.101 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.101 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.359 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.359 08:30:31 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:58.359 08:30:31 -- host/discovery.sh@93 -- # get_bdev_list 00:33:58.359 08:30:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.359 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.359 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.359 08:30:31 -- host/discovery.sh@55 -- # sort 00:33:58.359 08:30:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.359 08:30:31 -- host/discovery.sh@55 -- # xargs 00:33:58.359 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.359 08:30:31 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:33:58.359 08:30:31 -- host/discovery.sh@94 -- # get_notification_count 00:33:58.359 08:30:31 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:58.359 08:30:31 -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:58.359 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.359 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.359 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.359 08:30:31 -- host/discovery.sh@74 -- # notification_count=0 00:33:58.359 08:30:31 -- host/discovery.sh@75 -- # notify_id=0 00:33:58.359 08:30:31 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:33:58.359 08:30:31 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:58.359 08:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.359 08:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.359 08:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.359 08:30:31 -- host/discovery.sh@100 -- # sleep 1 00:33:58.926 [2024-04-17 08:30:32.063567] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:58.926 [2024-04-17 08:30:32.063625] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:58.926 [2024-04-17 08:30:32.063660] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:58.926 [2024-04-17 08:30:32.069645] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:58.926 [2024-04-17 08:30:32.126402] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:58.926 [2024-04-17 08:30:32.126465] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:59.494 08:30:32 -- host/discovery.sh@101 -- # get_subsystem_names 00:33:59.494 08:30:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:59.494 08:30:32 -- host/discovery.sh@59 -- # sort 00:33:59.494 08:30:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:59.494 08:30:32 -- host/discovery.sh@59 -- # xargs 00:33:59.494 08:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.494 08:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.494 08:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.494 08:30:32 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.494 08:30:32 -- host/discovery.sh@102 -- # get_bdev_list 00:33:59.494 08:30:32 -- host/discovery.sh@55 -- # sort 00:33:59.494 08:30:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.494 08:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.494 08:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.494 08:30:32 -- host/discovery.sh@55 -- # xargs 00:33:59.494 08:30:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:59.494 08:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.494 08:30:32 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:59.494 08:30:32 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:33:59.494 08:30:32 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:59.494 08:30:32 -- host/discovery.sh@63 -- # sort -n 00:33:59.494 08:30:32 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:59.494 08:30:32 -- host/discovery.sh@63 -- # xargs 00:33:59.494 08:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.494 08:30:32 -- 
common/autotest_common.sh@10 -- # set +x 00:33:59.494 08:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.494 08:30:32 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:33:59.494 08:30:32 -- host/discovery.sh@104 -- # get_notification_count 00:33:59.494 08:30:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:59.494 08:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.494 08:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.494 08:30:32 -- host/discovery.sh@74 -- # jq '. | length' 00:33:59.494 08:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.494 08:30:32 -- host/discovery.sh@74 -- # notification_count=1 00:33:59.494 08:30:32 -- host/discovery.sh@75 -- # notify_id=1 00:33:59.494 08:30:32 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:33:59.494 08:30:32 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:59.494 08:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.494 08:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.494 08:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.494 08:30:32 -- host/discovery.sh@109 -- # sleep 1 00:34:00.869 08:30:33 -- host/discovery.sh@110 -- # get_bdev_list 00:34:00.869 08:30:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:00.869 08:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.869 08:30:33 -- common/autotest_common.sh@10 -- # set +x 00:34:00.869 08:30:33 -- host/discovery.sh@55 -- # xargs 00:34:00.869 08:30:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:00.869 08:30:33 -- host/discovery.sh@55 -- # sort 00:34:00.869 08:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.869 08:30:33 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:00.869 08:30:33 -- host/discovery.sh@111 -- # get_notification_count 00:34:00.869 08:30:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:00.869 08:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.869 08:30:33 -- common/autotest_common.sh@10 -- # set +x 00:34:00.869 08:30:33 -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:00.869 08:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.869 08:30:33 -- host/discovery.sh@74 -- # notification_count=1 00:34:00.869 08:30:33 -- host/discovery.sh@75 -- # notify_id=2 00:34:00.869 08:30:33 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:34:00.869 08:30:33 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:00.869 08:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.869 08:30:33 -- common/autotest_common.sh@10 -- # set +x 00:34:00.869 [2024-04-17 08:30:33.930845] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:00.869 [2024-04-17 08:30:33.931766] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:00.869 [2024-04-17 08:30:33.931806] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:00.869 08:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.869 08:30:33 -- host/discovery.sh@117 -- # sleep 1 00:34:00.869 [2024-04-17 08:30:33.937737] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:00.869 [2024-04-17 08:30:33.994908] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:00.869 [2024-04-17 08:30:33.994954] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:00.869 [2024-04-17 08:30:33.994960] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:01.804 08:30:34 -- host/discovery.sh@118 -- # get_subsystem_names 00:34:01.804 08:30:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:01.804 08:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.804 08:30:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:01.804 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:34:01.804 08:30:34 -- host/discovery.sh@59 -- # sort 00:34:01.804 08:30:34 -- host/discovery.sh@59 -- # xargs 00:34:01.804 08:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.804 08:30:34 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.804 08:30:34 -- host/discovery.sh@119 -- # get_bdev_list 00:34:01.804 08:30:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.804 08:30:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:01.804 08:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.804 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:34:01.804 08:30:34 -- host/discovery.sh@55 -- # sort 00:34:01.804 08:30:34 -- host/discovery.sh@55 -- # xargs 00:34:01.804 08:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.804 08:30:35 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:01.804 08:30:35 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:34:01.804 08:30:35 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:01.804 08:30:35 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:01.804 08:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.804 08:30:35 -- common/autotest_common.sh@10 -- # set +x 00:34:01.804 08:30:35 -- host/discovery.sh@63 
-- # sort -n 00:34:01.804 08:30:35 -- host/discovery.sh@63 -- # xargs 00:34:01.804 08:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.804 08:30:35 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:01.804 08:30:35 -- host/discovery.sh@121 -- # get_notification_count 00:34:01.804 08:30:35 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:01.804 08:30:35 -- host/discovery.sh@74 -- # jq '. | length' 00:34:01.804 08:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.805 08:30:35 -- common/autotest_common.sh@10 -- # set +x 00:34:01.805 08:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.805 08:30:35 -- host/discovery.sh@74 -- # notification_count=0 00:34:01.805 08:30:35 -- host/discovery.sh@75 -- # notify_id=2 00:34:01.805 08:30:35 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:34:01.805 08:30:35 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:01.805 08:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.805 08:30:35 -- common/autotest_common.sh@10 -- # set +x 00:34:02.063 [2024-04-17 08:30:35.139377] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:02.063 [2024-04-17 08:30:35.139422] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:02.063 08:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:02.063 08:30:35 -- host/discovery.sh@127 -- # sleep 1 00:34:02.063 [2024-04-17 08:30:35.145352] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:02.063 [2024-04-17 08:30:35.145386] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:02.063 [2024-04-17 08:30:35.145490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.063 [2024-04-17 08:30:35.145524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.063 [2024-04-17 08:30:35.145533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.063 [2024-04-17 08:30:35.145540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.063 [2024-04-17 08:30:35.145547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.064 [2024-04-17 08:30:35.145554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.064 [2024-04-17 08:30:35.145560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.064 [2024-04-17 08:30:35.145567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.064 [2024-04-17 08:30:35.145574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218ae60 is same with the state(5) to be set 00:34:03.009 08:30:36 -- host/discovery.sh@128 -- # 
get_subsystem_names 00:34:03.009 08:30:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:03.009 08:30:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:03.009 08:30:36 -- host/discovery.sh@59 -- # sort 00:34:03.009 08:30:36 -- host/discovery.sh@59 -- # xargs 00:34:03.009 08:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.009 08:30:36 -- common/autotest_common.sh@10 -- # set +x 00:34:03.009 08:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.009 08:30:36 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.009 08:30:36 -- host/discovery.sh@129 -- # get_bdev_list 00:34:03.010 08:30:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.010 08:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.010 08:30:36 -- common/autotest_common.sh@10 -- # set +x 00:34:03.010 08:30:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:03.010 08:30:36 -- host/discovery.sh@55 -- # sort 00:34:03.010 08:30:36 -- host/discovery.sh@55 -- # xargs 00:34:03.010 08:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.010 08:30:36 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:03.010 08:30:36 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:34:03.010 08:30:36 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:03.010 08:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.010 08:30:36 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:03.010 08:30:36 -- common/autotest_common.sh@10 -- # set +x 00:34:03.010 08:30:36 -- host/discovery.sh@63 -- # sort -n 00:34:03.010 08:30:36 -- host/discovery.sh@63 -- # xargs 00:34:03.010 08:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.010 08:30:36 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:34:03.010 08:30:36 -- host/discovery.sh@131 -- # get_notification_count 00:34:03.010 08:30:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:03.010 08:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.010 08:30:36 -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:03.010 08:30:36 -- common/autotest_common.sh@10 -- # set +x 00:34:03.010 08:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.010 08:30:36 -- host/discovery.sh@74 -- # notification_count=0 00:34:03.010 08:30:36 -- host/discovery.sh@75 -- # notify_id=2 00:34:03.010 08:30:36 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:34:03.010 08:30:36 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:03.010 08:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.010 08:30:36 -- common/autotest_common.sh@10 -- # set +x 00:34:03.267 08:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.267 08:30:36 -- host/discovery.sh@135 -- # sleep 1 00:34:04.203 08:30:37 -- host/discovery.sh@136 -- # get_subsystem_names 00:34:04.203 08:30:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:04.203 08:30:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:04.203 08:30:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:04.203 08:30:37 -- common/autotest_common.sh@10 -- # set +x 00:34:04.203 08:30:37 -- host/discovery.sh@59 -- # xargs 00:34:04.203 08:30:37 -- host/discovery.sh@59 -- # sort 00:34:04.203 08:30:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:04.203 08:30:37 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:34:04.203 08:30:37 -- host/discovery.sh@137 -- # get_bdev_list 00:34:04.203 08:30:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.203 08:30:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:04.203 08:30:37 -- common/autotest_common.sh@10 -- # set +x 00:34:04.203 08:30:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:04.203 08:30:37 -- host/discovery.sh@55 -- # sort 00:34:04.203 08:30:37 -- host/discovery.sh@55 -- # xargs 00:34:04.203 08:30:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:04.203 08:30:37 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:34:04.203 08:30:37 -- host/discovery.sh@138 -- # get_notification_count 00:34:04.203 08:30:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:04.203 08:30:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:04.203 08:30:37 -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:04.203 08:30:37 -- common/autotest_common.sh@10 -- # set +x 00:34:04.203 08:30:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:04.203 08:30:37 -- host/discovery.sh@74 -- # notification_count=2 00:34:04.203 08:30:37 -- host/discovery.sh@75 -- # notify_id=4 00:34:04.203 08:30:37 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:34:04.203 08:30:37 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:04.203 08:30:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:04.203 08:30:37 -- common/autotest_common.sh@10 -- # set +x 00:34:05.577 [2024-04-17 08:30:38.514065] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:05.577 [2024-04-17 08:30:38.514107] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:05.577 [2024-04-17 08:30:38.514123] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:05.577 [2024-04-17 08:30:38.520089] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:05.577 [2024-04-17 08:30:38.579453] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:05.577 [2024-04-17 08:30:38.579512] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:05.577 08:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.577 08:30:38 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:05.577 08:30:38 -- common/autotest_common.sh@640 -- # local es=0 00:34:05.577 08:30:38 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:05.577 08:30:38 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:34:05.577 08:30:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:34:05.577 08:30:38 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:34:05.577 08:30:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:34:05.577 08:30:38 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:05.577 08:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.577 08:30:38 -- common/autotest_common.sh@10 -- # set +x 00:34:05.577 request: 00:34:05.577 { 00:34:05.577 "name": "nvme", 00:34:05.577 "trtype": "tcp", 00:34:05.577 "traddr": "10.0.0.2", 00:34:05.577 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:05.577 "adrfam": "ipv4", 00:34:05.577 "trsvcid": "8009", 00:34:05.577 "wait_for_attach": true, 00:34:05.577 "method": "bdev_nvme_start_discovery", 00:34:05.577 "req_id": 1 00:34:05.577 } 00:34:05.577 Got JSON-RPC error response 00:34:05.577 response: 00:34:05.577 { 00:34:05.577 "code": -17, 00:34:05.577 "message": "File exists" 00:34:05.577 } 00:34:05.577 08:30:38 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:34:05.577 08:30:38 -- common/autotest_common.sh@643 -- # es=1 00:34:05.577 08:30:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:34:05.577 08:30:38 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:34:05.577 08:30:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:34:05.577 08:30:38 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:34:05.577 08:30:38 -- host/discovery.sh@67 -- # sort 00:34:05.577 08:30:38 -- host/discovery.sh@67 -- # xargs 00:34:05.577 08:30:38 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:05.577 08:30:38 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:05.577 08:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.577 08:30:38 -- common/autotest_common.sh@10 -- # set +x 00:34:05.577 08:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.577 08:30:38 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:34:05.577 08:30:38 -- host/discovery.sh@147 -- # get_bdev_list 00:34:05.577 08:30:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.577 08:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.577 08:30:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:05.577 08:30:38 -- common/autotest_common.sh@10 -- # set +x 00:34:05.577 08:30:38 -- host/discovery.sh@55 -- # sort 00:34:05.577 08:30:38 -- host/discovery.sh@55 -- # xargs 00:34:05.577 08:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.577 08:30:38 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:05.577 08:30:38 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:05.577 08:30:38 -- common/autotest_common.sh@640 -- # local es=0 00:34:05.577 08:30:38 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:05.577 08:30:38 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:34:05.577 08:30:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:34:05.577 08:30:38 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:34:05.577 08:30:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:34:05.578 08:30:38 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:05.578 08:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.578 08:30:38 -- common/autotest_common.sh@10 -- # set +x 00:34:05.578 request: 00:34:05.578 { 00:34:05.578 "name": "nvme_second", 00:34:05.578 "trtype": "tcp", 00:34:05.578 "traddr": "10.0.0.2", 00:34:05.578 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:05.578 "adrfam": "ipv4", 00:34:05.578 "trsvcid": "8009", 00:34:05.578 "wait_for_attach": true, 00:34:05.578 "method": "bdev_nvme_start_discovery", 00:34:05.578 "req_id": 1 00:34:05.578 } 00:34:05.578 Got JSON-RPC error response 00:34:05.578 response: 00:34:05.578 { 00:34:05.578 "code": -17, 00:34:05.578 "message": "File exists" 00:34:05.578 } 00:34:05.578 08:30:38 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:34:05.578 08:30:38 -- common/autotest_common.sh@643 -- # es=1 00:34:05.578 08:30:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:34:05.578 08:30:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:34:05.578 08:30:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:34:05.578 08:30:38 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:34:05.578 
08:30:38 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:05.578 08:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.578 08:30:38 -- common/autotest_common.sh@10 -- # set +x 00:34:05.578 08:30:38 -- host/discovery.sh@67 -- # xargs 00:34:05.578 08:30:38 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:05.578 08:30:38 -- host/discovery.sh@67 -- # sort 00:34:05.578 08:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.578 08:30:38 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:34:05.578 08:30:38 -- host/discovery.sh@153 -- # get_bdev_list 00:34:05.578 08:30:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.578 08:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.578 08:30:38 -- common/autotest_common.sh@10 -- # set +x 00:34:05.578 08:30:38 -- host/discovery.sh@55 -- # sort 00:34:05.578 08:30:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:05.578 08:30:38 -- host/discovery.sh@55 -- # xargs 00:34:05.578 08:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:05.578 08:30:38 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:05.578 08:30:38 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:05.578 08:30:38 -- common/autotest_common.sh@640 -- # local es=0 00:34:05.578 08:30:38 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:05.578 08:30:38 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:34:05.578 08:30:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:34:05.578 08:30:38 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:34:05.578 08:30:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:34:05.578 08:30:38 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:05.578 08:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:05.578 08:30:38 -- common/autotest_common.sh@10 -- # set +x 00:34:06.510 [2024-04-17 08:30:39.822948] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.510 [2024-04-17 08:30:39.823064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.510 [2024-04-17 08:30:39.823091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.510 [2024-04-17 08:30:39.823102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2187440 with addr=10.0.0.2, port=8010 00:34:06.510 [2024-04-17 08:30:39.823120] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:06.510 [2024-04-17 08:30:39.823127] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:06.510 [2024-04-17 08:30:39.823134] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:07.883 [2024-04-17 08:30:40.821016] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.883 [2024-04-17 08:30:40.821113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.883 [2024-04-17 08:30:40.821140] posix.c:1032:posix_sock_create: 
*ERROR*: connect() failed, errno = 111 00:34:07.883 [2024-04-17 08:30:40.821151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2187440 with addr=10.0.0.2, port=8010 00:34:07.883 [2024-04-17 08:30:40.821170] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:07.883 [2024-04-17 08:30:40.821177] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:07.883 [2024-04-17 08:30:40.821184] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:08.819 [2024-04-17 08:30:41.818949] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:08.819 request: 00:34:08.819 { 00:34:08.819 "name": "nvme_second", 00:34:08.819 "trtype": "tcp", 00:34:08.819 "traddr": "10.0.0.2", 00:34:08.819 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:08.819 "adrfam": "ipv4", 00:34:08.819 "trsvcid": "8010", 00:34:08.819 "attach_timeout_ms": 3000, 00:34:08.819 "method": "bdev_nvme_start_discovery", 00:34:08.819 "req_id": 1 00:34:08.819 } 00:34:08.819 Got JSON-RPC error response 00:34:08.819 response: 00:34:08.819 { 00:34:08.819 "code": -110, 00:34:08.819 "message": "Connection timed out" 00:34:08.819 } 00:34:08.819 08:30:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:34:08.819 08:30:41 -- common/autotest_common.sh@643 -- # es=1 00:34:08.819 08:30:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:34:08.819 08:30:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:34:08.819 08:30:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:34:08.819 08:30:41 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:34:08.819 08:30:41 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:08.819 08:30:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:08.819 08:30:41 -- common/autotest_common.sh@10 -- # set +x 00:34:08.819 08:30:41 -- host/discovery.sh@67 -- # sort 00:34:08.819 08:30:41 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:08.819 08:30:41 -- host/discovery.sh@67 -- # xargs 00:34:08.819 08:30:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:08.819 08:30:41 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:34:08.819 08:30:41 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:34:08.819 08:30:41 -- host/discovery.sh@162 -- # kill 70647 00:34:08.819 08:30:41 -- host/discovery.sh@163 -- # nvmftestfini 00:34:08.819 08:30:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:08.819 08:30:41 -- nvmf/common.sh@116 -- # sync 00:34:08.819 08:30:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:08.819 08:30:42 -- nvmf/common.sh@119 -- # set +e 00:34:08.819 08:30:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:08.819 08:30:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:08.819 rmmod nvme_tcp 00:34:08.819 rmmod nvme_fabrics 00:34:08.819 rmmod nvme_keyring 00:34:08.819 08:30:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:08.819 08:30:42 -- nvmf/common.sh@123 -- # set -e 00:34:08.819 08:30:42 -- nvmf/common.sh@124 -- # return 0 00:34:08.819 08:30:42 -- nvmf/common.sh@477 -- # '[' -n 70615 ']' 00:34:08.819 08:30:42 -- nvmf/common.sh@478 -- # killprocess 70615 00:34:08.819 08:30:42 -- common/autotest_common.sh@926 -- # '[' -z 70615 ']' 00:34:08.819 08:30:42 -- common/autotest_common.sh@930 -- # kill -0 70615 00:34:08.819 08:30:42 -- common/autotest_common.sh@931 -- # uname 00:34:08.819 08:30:42 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:08.819 08:30:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70615 00:34:08.819 08:30:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:08.819 08:30:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:08.819 killing process with pid 70615 00:34:08.819 08:30:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70615' 00:34:08.819 08:30:42 -- common/autotest_common.sh@945 -- # kill 70615 00:34:08.819 08:30:42 -- common/autotest_common.sh@950 -- # wait 70615 00:34:09.078 08:30:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:09.078 08:30:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:09.078 08:30:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:09.078 08:30:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:09.078 08:30:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:09.078 08:30:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.078 08:30:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:09.078 08:30:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.078 08:30:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:34:09.078 00:34:09.078 real 0m13.871s 00:34:09.078 user 0m26.252s 00:34:09.078 sys 0m2.166s 00:34:09.078 08:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:09.078 08:30:42 -- common/autotest_common.sh@10 -- # set +x 00:34:09.078 ************************************ 00:34:09.078 END TEST nvmf_discovery 00:34:09.078 ************************************ 00:34:09.339 08:30:42 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:09.339 08:30:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:09.339 08:30:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:09.339 08:30:42 -- common/autotest_common.sh@10 -- # set +x 00:34:09.339 ************************************ 00:34:09.339 START TEST nvmf_discovery_remove_ifc 00:34:09.339 ************************************ 00:34:09.339 08:30:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:09.339 * Looking for test storage... 
00:34:09.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:09.339 08:30:42 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:09.339 08:30:42 -- nvmf/common.sh@7 -- # uname -s 00:34:09.339 08:30:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.339 08:30:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.339 08:30:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.339 08:30:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.339 08:30:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.339 08:30:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.339 08:30:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.339 08:30:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.339 08:30:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.339 08:30:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.339 08:30:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:34:09.339 08:30:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:34:09.339 08:30:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.339 08:30:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.339 08:30:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:09.339 08:30:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:09.339 08:30:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.339 08:30:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.339 08:30:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.339 08:30:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.339 08:30:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.340 08:30:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.340 08:30:42 -- 
paths/export.sh@5 -- # export PATH 00:34:09.340 08:30:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.340 08:30:42 -- nvmf/common.sh@46 -- # : 0 00:34:09.340 08:30:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:09.340 08:30:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:09.340 08:30:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:09.340 08:30:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.340 08:30:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.340 08:30:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:09.340 08:30:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:09.340 08:30:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:09.340 08:30:42 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:09.340 08:30:42 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:09.340 08:30:42 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:09.340 08:30:42 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:09.340 08:30:42 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:09.340 08:30:42 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:09.340 08:30:42 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:09.340 08:30:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:09.340 08:30:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:09.340 08:30:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:09.340 08:30:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:09.340 08:30:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:09.340 08:30:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.340 08:30:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:09.340 08:30:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.340 08:30:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:34:09.340 08:30:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:34:09.340 08:30:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:34:09.340 08:30:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:34:09.340 08:30:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:34:09.340 08:30:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:34:09.340 08:30:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.340 08:30:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.340 08:30:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:09.340 08:30:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:34:09.340 08:30:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:09.340 08:30:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:09.340 08:30:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:09.340 08:30:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:34:09.340 08:30:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:09.340 08:30:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:09.340 08:30:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:09.340 08:30:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:09.340 08:30:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:34:09.340 08:30:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:34:09.340 Cannot find device "nvmf_tgt_br" 00:34:09.340 08:30:42 -- nvmf/common.sh@154 -- # true 00:34:09.340 08:30:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:34:09.340 Cannot find device "nvmf_tgt_br2" 00:34:09.340 08:30:42 -- nvmf/common.sh@155 -- # true 00:34:09.340 08:30:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:34:09.340 08:30:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:34:09.340 Cannot find device "nvmf_tgt_br" 00:34:09.340 08:30:42 -- nvmf/common.sh@157 -- # true 00:34:09.340 08:30:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:34:09.340 Cannot find device "nvmf_tgt_br2" 00:34:09.340 08:30:42 -- nvmf/common.sh@158 -- # true 00:34:09.340 08:30:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:34:09.340 08:30:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:34:09.600 08:30:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:09.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:09.600 08:30:42 -- nvmf/common.sh@161 -- # true 00:34:09.600 08:30:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:09.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:09.600 08:30:42 -- nvmf/common.sh@162 -- # true 00:34:09.600 08:30:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:34:09.600 08:30:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:09.600 08:30:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:09.600 08:30:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:09.600 08:30:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:09.600 08:30:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:09.600 08:30:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:09.600 08:30:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:09.600 08:30:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:09.600 08:30:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:34:09.600 08:30:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:34:09.600 08:30:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:34:09.600 08:30:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:34:09.600 08:30:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:09.600 08:30:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:09.600 08:30:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:09.600 08:30:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:34:09.600 08:30:42 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:34:09.600 08:30:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:34:09.600 08:30:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:09.600 08:30:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:09.600 08:30:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:09.600 08:30:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:09.600 08:30:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:34:09.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:34:09.600 00:34:09.600 --- 10.0.0.2 ping statistics --- 00:34:09.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.600 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:34:09.600 08:30:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:34:09.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:09.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:34:09.600 00:34:09.600 --- 10.0.0.3 ping statistics --- 00:34:09.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.600 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:34:09.600 08:30:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:09.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:09.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:34:09.600 00:34:09.600 --- 10.0.0.1 ping statistics --- 00:34:09.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.600 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:34:09.600 08:30:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.600 08:30:42 -- nvmf/common.sh@421 -- # return 0 00:34:09.600 08:30:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:34:09.600 08:30:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.600 08:30:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:09.600 08:30:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:09.600 08:30:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.600 08:30:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:09.600 08:30:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:09.600 08:30:42 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:09.600 08:30:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:09.600 08:30:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:09.600 08:30:42 -- common/autotest_common.sh@10 -- # set +x 00:34:09.600 08:30:42 -- nvmf/common.sh@469 -- # nvmfpid=71136 00:34:09.600 08:30:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:09.600 08:30:42 -- nvmf/common.sh@470 -- # waitforlisten 71136 00:34:09.600 08:30:42 -- common/autotest_common.sh@819 -- # '[' -z 71136 ']' 00:34:09.600 08:30:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.600 08:30:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:09.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:09.600 08:30:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
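(Editor's note: the long run of ip/iptables commands traced above is nvmf_veth_init from test/nvmf/common.sh rebuilding the virtual topology for NET_TYPE=virt. Condensed from the commands shown in this log — the second target interface, the iptables rules and the reverse pings are omitted for brevity — the essential steps are:

    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                            # bridge the two *_br peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.2                                         # initiator -> target reachability check
)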
00:34:09.600 08:30:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:09.600 08:30:42 -- common/autotest_common.sh@10 -- # set +x 00:34:09.859 [2024-04-17 08:30:42.940326] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:09.859 [2024-04-17 08:30:42.940423] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:09.859 [2024-04-17 08:30:43.082903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.118 [2024-04-17 08:30:43.202908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:10.118 [2024-04-17 08:30:43.203087] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.118 [2024-04-17 08:30:43.203100] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.118 [2024-04-17 08:30:43.203110] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:10.118 [2024-04-17 08:30:43.203144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.684 08:30:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:10.684 08:30:43 -- common/autotest_common.sh@852 -- # return 0 00:34:10.684 08:30:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:10.684 08:30:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:10.684 08:30:43 -- common/autotest_common.sh@10 -- # set +x 00:34:10.684 08:30:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.684 08:30:43 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:10.684 08:30:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.684 08:30:43 -- common/autotest_common.sh@10 -- # set +x 00:34:10.684 [2024-04-17 08:30:43.890627] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.684 [2024-04-17 08:30:43.898750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:10.684 null0 00:34:10.684 [2024-04-17 08:30:43.934663] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.684 08:30:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.684 08:30:43 -- host/discovery_remove_ifc.sh@59 -- # hostpid=71168 00:34:10.684 08:30:43 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 71168 /tmp/host.sock 00:34:10.684 08:30:43 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:10.684 08:30:43 -- common/autotest_common.sh@819 -- # '[' -z 71168 ']' 00:34:10.684 08:30:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:34:10.684 08:30:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:10.684 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:10.684 08:30:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:10.684 08:30:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:10.684 08:30:43 -- common/autotest_common.sh@10 -- # set +x 00:34:10.684 [2024-04-17 08:30:44.012609] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:34:10.684 [2024-04-17 08:30:44.012702] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71168 ] 00:34:10.944 [2024-04-17 08:30:44.152284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.944 [2024-04-17 08:30:44.261962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:10.944 [2024-04-17 08:30:44.262139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.882 08:30:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:11.882 08:30:44 -- common/autotest_common.sh@852 -- # return 0 00:34:11.882 08:30:44 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:11.882 08:30:44 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:11.882 08:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.882 08:30:44 -- common/autotest_common.sh@10 -- # set +x 00:34:11.882 08:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.882 08:30:44 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:11.882 08:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.882 08:30:44 -- common/autotest_common.sh@10 -- # set +x 00:34:11.882 08:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.882 08:30:44 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:11.882 08:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.882 08:30:44 -- common/autotest_common.sh@10 -- # set +x 00:34:12.816 [2024-04-17 08:30:45.999652] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:12.816 [2024-04-17 08:30:45.999703] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:12.816 [2024-04-17 08:30:45.999724] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:12.816 [2024-04-17 08:30:46.005702] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:12.816 [2024-04-17 08:30:46.061755] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:12.816 [2024-04-17 08:30:46.061839] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:12.816 [2024-04-17 08:30:46.061868] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:12.816 [2024-04-17 08:30:46.061888] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:12.816 [2024-04-17 08:30:46.061917] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:12.816 08:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.816 08:30:46 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:12.817 08:30:46 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.817 08:30:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.817 [2024-04-17 08:30:46.068281] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21d3bb0 was disconnected and freed. delete nvme_qpair. 00:34:12.817 08:30:46 -- common/autotest_common.sh@10 -- # set +x 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:12.817 08:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.817 08:30:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:12.817 08:30:46 -- common/autotest_common.sh@10 -- # set +x 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:12.817 08:30:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:12.817 08:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:13.075 08:30:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:13.075 08:30:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:14.007 08:30:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:14.007 08:30:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.007 08:30:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:14.007 08:30:47 -- common/autotest_common.sh@10 -- # set +x 00:34:14.007 08:30:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:14.007 08:30:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:14.007 08:30:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:14.007 08:30:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:14.007 08:30:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:14.007 08:30:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:14.938 08:30:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:14.938 08:30:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.938 08:30:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:14.938 08:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:14.938 08:30:48 -- common/autotest_common.sh@10 -- # set +x 00:34:14.938 08:30:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:14.938 08:30:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:14.938 08:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:14.938 08:30:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:14.938 08:30:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:16.310 08:30:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.310 08:30:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.310 08:30:49 -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.310 08:30:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:16.310 08:30:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.310 08:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.310 08:30:49 -- common/autotest_common.sh@10 -- # set +x 00:34:16.310 08:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.310 08:30:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:16.310 08:30:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:17.244 08:30:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.244 08:30:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.244 08:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:17.244 08:30:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.244 08:30:50 -- common/autotest_common.sh@10 -- # set +x 00:34:17.244 08:30:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.244 08:30:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.244 08:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:17.244 08:30:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:17.244 08:30:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:18.179 08:30:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:18.179 08:30:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:18.179 08:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.179 08:30:51 -- common/autotest_common.sh@10 -- # set +x 00:34:18.179 08:30:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:18.179 08:30:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:18.179 08:30:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:18.179 08:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.179 08:30:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:18.179 08:30:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:18.179 [2024-04-17 08:30:51.479055] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:18.179 [2024-04-17 08:30:51.479161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.179 [2024-04-17 08:30:51.479175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.179 [2024-04-17 08:30:51.479187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.179 [2024-04-17 08:30:51.479196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.179 [2024-04-17 08:30:51.479204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.179 [2024-04-17 08:30:51.479212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.179 [2024-04-17 08:30:51.479220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.179 [2024-04-17 08:30:51.479228] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.179 [2024-04-17 08:30:51.479236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.179 [2024-04-17 08:30:51.479244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.179 [2024-04-17 08:30:51.479251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21421c0 is same with the state(5) to be set 00:34:18.179 [2024-04-17 08:30:51.489031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21421c0 (9): Bad file descriptor 00:34:18.179 [2024-04-17 08:30:51.499049] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:19.113 08:30:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:19.113 08:30:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.113 08:30:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:19.113 08:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:19.113 08:30:52 -- common/autotest_common.sh@10 -- # set +x 00:34:19.113 08:30:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:19.113 08:30:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:19.372 [2024-04-17 08:30:52.553369] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:34:20.336 [2024-04-17 08:30:53.577369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:21.269 [2024-04-17 08:30:54.601377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:21.269 [2024-04-17 08:30:54.601501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21421c0 with addr=10.0.0.2, port=4420 00:34:21.269 [2024-04-17 08:30:54.601530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21421c0 is same with the state(5) to be set 00:34:21.270 [2024-04-17 08:30:54.602158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21421c0 (9): Bad file descriptor 00:34:21.270 [2024-04-17 08:30:54.602219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
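(Editor's note: the errno-110 connect retries and the "Resetting controller failed" message above follow directly from the timeouts this test passed to bdev_nvme_start_discovery earlier in the log: reconnects are attempted every 1 s and the controller is given up after 2 s once its only path is gone. A sketch of the same invocation, again assuming rpc_cmd forwards to scripts/rpc.py:

    # Discovery with aggressive path-failure timeouts, as used by discovery_remove_ifc.sh:
    # retry every 1 s, fast-fail I/O after 1 s, abandon the controller after 2 s offline.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
)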
00:34:21.270 [2024-04-17 08:30:54.602261] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:21.270 [2024-04-17 08:30:54.602362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.270 [2024-04-17 08:30:54.602385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.270 [2024-04-17 08:30:54.602402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.270 [2024-04-17 08:30:54.602413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.270 [2024-04-17 08:30:54.602425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.270 [2024-04-17 08:30:54.602434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.270 [2024-04-17 08:30:54.602446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.270 [2024-04-17 08:30:54.602456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.270 [2024-04-17 08:30:54.602468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.270 [2024-04-17 08:30:54.602479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.270 [2024-04-17 08:30:54.602491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:34:21.528 [2024-04-17 08:30:54.602621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142420 (9): Bad file descriptor 00:34:21.528 [2024-04-17 08:30:54.603649] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:21.528 [2024-04-17 08:30:54.603684] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:21.528 08:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.528 08:30:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:21.528 08:30:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.465 08:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:22.465 08:30:55 -- common/autotest_common.sh@10 -- # set +x 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:22.465 08:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.465 08:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:22.465 08:30:55 -- common/autotest_common.sh@10 -- # set +x 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:22.465 08:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:22.465 08:30:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:23.401 [2024-04-17 08:30:56.604238] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:23.401 [2024-04-17 08:30:56.604277] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:23.401 [2024-04-17 08:30:56.604294] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:23.401 [2024-04-17 08:30:56.610277] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:23.401 [2024-04-17 08:30:56.665492] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:23.401 [2024-04-17 08:30:56.665560] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:23.401 [2024-04-17 08:30:56.665582] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:23.401 [2024-04-17 08:30:56.665599] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:34:23.401 [2024-04-17 08:30:56.665607] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:23.401 [2024-04-17 08:30:56.672897] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21a5cb0 was disconnected and freed. delete nvme_qpair. 00:34:23.660 08:30:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:23.660 08:30:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.660 08:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.660 08:30:56 -- common/autotest_common.sh@10 -- # set +x 00:34:23.660 08:30:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:23.660 08:30:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:23.660 08:30:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:23.660 08:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.660 08:30:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:23.660 08:30:56 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:23.660 08:30:56 -- host/discovery_remove_ifc.sh@90 -- # killprocess 71168 00:34:23.660 08:30:56 -- common/autotest_common.sh@926 -- # '[' -z 71168 ']' 00:34:23.660 08:30:56 -- common/autotest_common.sh@930 -- # kill -0 71168 00:34:23.660 08:30:56 -- common/autotest_common.sh@931 -- # uname 00:34:23.660 08:30:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:23.660 08:30:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71168 00:34:23.660 killing process with pid 71168 00:34:23.660 08:30:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:23.660 08:30:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:23.660 08:30:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71168' 00:34:23.660 08:30:56 -- common/autotest_common.sh@945 -- # kill 71168 00:34:23.660 08:30:56 -- common/autotest_common.sh@950 -- # wait 71168 00:34:23.919 08:30:57 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:23.919 08:30:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:23.919 08:30:57 -- nvmf/common.sh@116 -- # sync 00:34:23.919 08:30:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:23.919 08:30:57 -- nvmf/common.sh@119 -- # set +e 00:34:23.919 08:30:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:23.919 08:30:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:23.919 rmmod nvme_tcp 00:34:23.919 rmmod nvme_fabrics 00:34:23.919 rmmod nvme_keyring 00:34:23.919 08:30:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:23.919 08:30:57 -- nvmf/common.sh@123 -- # set -e 00:34:23.919 08:30:57 -- nvmf/common.sh@124 -- # return 0 00:34:23.919 08:30:57 -- nvmf/common.sh@477 -- # '[' -n 71136 ']' 00:34:23.919 08:30:57 -- nvmf/common.sh@478 -- # killprocess 71136 00:34:23.919 08:30:57 -- common/autotest_common.sh@926 -- # '[' -z 71136 ']' 00:34:23.919 08:30:57 -- common/autotest_common.sh@930 -- # kill -0 71136 00:34:23.919 08:30:57 -- common/autotest_common.sh@931 -- # uname 00:34:23.919 08:30:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:23.919 08:30:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71136 00:34:23.919 killing process with pid 71136 00:34:23.919 08:30:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:23.919 08:30:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
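(Editor's note: the sequence just completed is the core of the remove-interface test: dropping the target address makes the attached namespace bdev disappear, and restoring it lets the discovery poller re-attach and expose the namespace under a new controller name. Condensed from the commands traced above into a stand-alone sketch, assuming the same namespace and socket paths as this run:

    # Fail the path: drop the target address and take the interface down ...
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # ... and poll until the bdev list is empty (nvme0n1 gone).
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
    # Restore the path ...
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # ... and poll until the rediscovered namespace shows up as nvme1n1.
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
)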
00:34:23.920 08:30:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71136' 00:34:23.920 08:30:57 -- common/autotest_common.sh@945 -- # kill 71136 00:34:23.920 08:30:57 -- common/autotest_common.sh@950 -- # wait 71136 00:34:24.179 08:30:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:24.179 08:30:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:24.179 08:30:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:24.179 08:30:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:24.179 08:30:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:24.179 08:30:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.179 08:30:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:24.179 08:30:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.179 08:30:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:34:24.179 00:34:24.179 real 0m15.038s 00:34:24.179 user 0m24.079s 00:34:24.179 sys 0m2.274s 00:34:24.179 08:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:24.179 08:30:57 -- common/autotest_common.sh@10 -- # set +x 00:34:24.179 ************************************ 00:34:24.179 END TEST nvmf_discovery_remove_ifc 00:34:24.179 ************************************ 00:34:24.439 08:30:57 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:34:24.439 08:30:57 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:24.439 08:30:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:24.439 08:30:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:24.439 08:30:57 -- common/autotest_common.sh@10 -- # set +x 00:34:24.439 ************************************ 00:34:24.439 START TEST nvmf_digest 00:34:24.439 ************************************ 00:34:24.439 08:30:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:24.439 * Looking for test storage... 
00:34:24.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:24.439 08:30:57 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:24.439 08:30:57 -- nvmf/common.sh@7 -- # uname -s 00:34:24.439 08:30:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.439 08:30:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.439 08:30:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.439 08:30:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.439 08:30:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.439 08:30:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.439 08:30:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.439 08:30:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.439 08:30:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.439 08:30:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.439 08:30:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:34:24.439 08:30:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:34:24.439 08:30:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.439 08:30:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.439 08:30:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:24.439 08:30:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:24.439 08:30:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.439 08:30:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.439 08:30:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.439 08:30:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.439 08:30:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.439 08:30:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.439 08:30:57 -- paths/export.sh@5 
-- # export PATH 00:34:24.439 08:30:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.439 08:30:57 -- nvmf/common.sh@46 -- # : 0 00:34:24.439 08:30:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:24.439 08:30:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:24.439 08:30:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:24.439 08:30:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.439 08:30:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.439 08:30:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:24.439 08:30:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:24.439 08:30:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:24.439 08:30:57 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:24.439 08:30:57 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:24.439 08:30:57 -- host/digest.sh@16 -- # runtime=2 00:34:24.439 08:30:57 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:34:24.439 08:30:57 -- host/digest.sh@132 -- # nvmftestinit 00:34:24.439 08:30:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:24.439 08:30:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.439 08:30:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:24.439 08:30:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:24.439 08:30:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:24.439 08:30:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.439 08:30:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:24.439 08:30:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.439 08:30:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:34:24.439 08:30:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:34:24.439 08:30:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:34:24.439 08:30:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:34:24.439 08:30:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:34:24.439 08:30:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:34:24.439 08:30:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:24.439 08:30:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:24.439 08:30:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:24.439 08:30:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:34:24.439 08:30:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:24.439 08:30:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:24.439 08:30:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:24.439 08:30:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:24.439 08:30:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:24.439 08:30:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:24.439 08:30:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:24.439 08:30:57 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:24.439 08:30:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:34:24.439 08:30:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:34:24.439 Cannot find device "nvmf_tgt_br" 00:34:24.439 08:30:57 -- nvmf/common.sh@154 -- # true 00:34:24.439 08:30:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:34:24.439 Cannot find device "nvmf_tgt_br2" 00:34:24.439 08:30:57 -- nvmf/common.sh@155 -- # true 00:34:24.439 08:30:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:34:24.439 08:30:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:34:24.439 Cannot find device "nvmf_tgt_br" 00:34:24.439 08:30:57 -- nvmf/common.sh@157 -- # true 00:34:24.439 08:30:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:34:24.439 Cannot find device "nvmf_tgt_br2" 00:34:24.439 08:30:57 -- nvmf/common.sh@158 -- # true 00:34:24.440 08:30:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:34:24.699 08:30:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:34:24.699 08:30:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:24.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:24.699 08:30:57 -- nvmf/common.sh@161 -- # true 00:34:24.699 08:30:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:24.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:24.699 08:30:57 -- nvmf/common.sh@162 -- # true 00:34:24.699 08:30:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:34:24.699 08:30:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:24.699 08:30:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:24.700 08:30:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:24.700 08:30:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:24.700 08:30:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:24.700 08:30:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:24.700 08:30:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:24.700 08:30:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:24.700 08:30:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:34:24.700 08:30:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:34:24.700 08:30:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:34:24.700 08:30:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:34:24.700 08:30:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:24.700 08:30:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:24.700 08:30:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:24.700 08:30:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:34:24.700 08:30:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:34:24.700 08:30:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:34:24.700 08:30:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:24.700 08:30:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:24.700 
08:30:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:24.700 08:30:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:24.700 08:30:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:34:24.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:24.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:34:24.959 00:34:24.959 --- 10.0.0.2 ping statistics --- 00:34:24.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.959 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:34:24.959 08:30:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:34:24.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:24.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:34:24.959 00:34:24.959 --- 10.0.0.3 ping statistics --- 00:34:24.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.959 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:34:24.959 08:30:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:24.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:24.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:34:24.959 00:34:24.959 --- 10.0.0.1 ping statistics --- 00:34:24.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.959 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:34:24.959 08:30:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:24.959 08:30:58 -- nvmf/common.sh@421 -- # return 0 00:34:24.959 08:30:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:34:24.959 08:30:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:24.959 08:30:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:24.959 08:30:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:24.959 08:30:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:24.959 08:30:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:24.959 08:30:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:24.960 08:30:58 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:24.960 08:30:58 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:34:24.960 08:30:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:24.960 08:30:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:24.960 08:30:58 -- common/autotest_common.sh@10 -- # set +x 00:34:24.960 ************************************ 00:34:24.960 START TEST nvmf_digest_clean 00:34:24.960 ************************************ 00:34:24.960 08:30:58 -- common/autotest_common.sh@1104 -- # run_digest 00:34:24.960 08:30:58 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:34:24.960 08:30:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:24.960 08:30:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:24.960 08:30:58 -- common/autotest_common.sh@10 -- # set +x 00:34:24.960 08:30:58 -- nvmf/common.sh@469 -- # nvmfpid=71583 00:34:24.960 08:30:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:24.960 08:30:58 -- nvmf/common.sh@470 -- # waitforlisten 71583 00:34:24.960 08:30:58 -- common/autotest_common.sh@819 -- # '[' -z 71583 ']' 00:34:24.960 08:30:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:24.960 08:30:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:24.960 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:24.960 08:30:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:24.960 08:30:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:24.960 08:30:58 -- common/autotest_common.sh@10 -- # set +x 00:34:24.960 [2024-04-17 08:30:58.154215] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:24.960 [2024-04-17 08:30:58.154336] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:24.960 [2024-04-17 08:30:58.286394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.219 [2024-04-17 08:30:58.403365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:25.219 [2024-04-17 08:30:58.403536] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.219 [2024-04-17 08:30:58.403548] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.219 [2024-04-17 08:30:58.403558] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:25.219 [2024-04-17 08:30:58.403592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.156 08:30:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:26.156 08:30:59 -- common/autotest_common.sh@852 -- # return 0 00:34:26.156 08:30:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:26.156 08:30:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:26.156 08:30:59 -- common/autotest_common.sh@10 -- # set +x 00:34:26.156 08:30:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:26.156 08:30:59 -- host/digest.sh@120 -- # common_target_config 00:34:26.156 08:30:59 -- host/digest.sh@43 -- # rpc_cmd 00:34:26.156 08:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:26.156 08:30:59 -- common/autotest_common.sh@10 -- # set +x 00:34:26.156 null0 00:34:26.156 [2024-04-17 08:30:59.282785] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.157 [2024-04-17 08:30:59.310874] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.157 08:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:26.157 08:30:59 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:34:26.157 08:30:59 -- host/digest.sh@77 -- # local rw bs qd 00:34:26.157 08:30:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:26.157 08:30:59 -- host/digest.sh@80 -- # rw=randread 00:34:26.157 08:30:59 -- host/digest.sh@80 -- # bs=4096 00:34:26.157 08:30:59 -- host/digest.sh@80 -- # qd=128 00:34:26.157 08:30:59 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:26.157 08:30:59 -- host/digest.sh@82 -- # bperfpid=71615 00:34:26.157 08:30:59 -- host/digest.sh@83 -- # waitforlisten 71615 /var/tmp/bperf.sock 00:34:26.157 08:30:59 -- common/autotest_common.sh@819 -- # '[' -z 71615 ']' 00:34:26.157 08:30:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:26.157 08:30:59 -- common/autotest_common.sh@824 -- 
# local max_retries=100 00:34:26.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:26.157 08:30:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:26.157 08:30:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:26.157 08:30:59 -- common/autotest_common.sh@10 -- # set +x 00:34:26.157 [2024-04-17 08:30:59.391677] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:26.157 [2024-04-17 08:30:59.391769] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71615 ] 00:34:26.416 [2024-04-17 08:30:59.532150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.416 [2024-04-17 08:30:59.635992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.984 08:31:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:26.984 08:31:00 -- common/autotest_common.sh@852 -- # return 0 00:34:26.984 08:31:00 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:34:26.984 08:31:00 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:26.984 08:31:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:27.242 08:31:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:27.242 08:31:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:28.048 nvme0n1 00:34:28.048 08:31:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:28.048 08:31:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:28.048 Running I/O for 2 seconds... 
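For reference, the clean-digest run above drives bdevperf entirely over its private RPC socket. A minimal sketch of that flow, reusing the socket path, target address and NQN from this run (the SPDK checkout path is the one on this VM; every command and argument below is copied from the trace):

  # start bdevperf paused on a private RPC socket (-z keeps it alive, --wait-for-rpc defers init)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # finish framework init, then attach the TCP controller with data digest enabled (--ddgst)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # kick off the timed workload against the attached namespace (nvme0n1)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests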
00:34:29.945 00:34:29.945 Latency(us) 00:34:29.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:29.945 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:29.945 nvme0n1 : 2.00 15520.74 60.63 0.00 0.00 8242.38 6782.55 18201.26 00:34:29.945 =================================================================================================================== 00:34:29.945 Total : 15520.74 60.63 0.00 0.00 8242.38 6782.55 18201.26 00:34:29.945 0 00:34:29.945 08:31:03 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:29.945 08:31:03 -- host/digest.sh@92 -- # get_accel_stats 00:34:29.945 08:31:03 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:29.945 | select(.opcode=="crc32c") 00:34:29.945 | "\(.module_name) \(.executed)"' 00:34:29.945 08:31:03 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:29.945 08:31:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:29.945 08:31:03 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:34:29.945 08:31:03 -- host/digest.sh@93 -- # exp_module=software 00:34:29.945 08:31:03 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:29.945 08:31:03 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:29.945 08:31:03 -- host/digest.sh@97 -- # killprocess 71615 00:34:29.945 08:31:03 -- common/autotest_common.sh@926 -- # '[' -z 71615 ']' 00:34:29.945 08:31:03 -- common/autotest_common.sh@930 -- # kill -0 71615 00:34:29.945 08:31:03 -- common/autotest_common.sh@931 -- # uname 00:34:29.945 08:31:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:29.945 08:31:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71615 00:34:30.203 08:31:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:30.203 08:31:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:30.203 killing process with pid 71615 00:34:30.203 08:31:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71615' 00:34:30.203 08:31:03 -- common/autotest_common.sh@945 -- # kill 71615 00:34:30.203 Received shutdown signal, test time was about 2.000000 seconds 00:34:30.203 00:34:30.203 Latency(us) 00:34:30.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:30.203 =================================================================================================================== 00:34:30.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:30.203 08:31:03 -- common/autotest_common.sh@950 -- # wait 71615 00:34:30.203 08:31:03 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:34:30.203 08:31:03 -- host/digest.sh@77 -- # local rw bs qd 00:34:30.203 08:31:03 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:30.203 08:31:03 -- host/digest.sh@80 -- # rw=randread 00:34:30.203 08:31:03 -- host/digest.sh@80 -- # bs=131072 00:34:30.203 08:31:03 -- host/digest.sh@80 -- # qd=16 00:34:30.203 08:31:03 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:30.203 08:31:03 -- host/digest.sh@82 -- # bperfpid=71670 00:34:30.203 08:31:03 -- host/digest.sh@83 -- # waitforlisten 71670 /var/tmp/bperf.sock 00:34:30.203 08:31:03 -- common/autotest_common.sh@819 -- # '[' -z 71670 ']' 00:34:30.203 08:31:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:30.203 08:31:03 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:34:30.203 08:31:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:30.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:30.203 08:31:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:30.203 08:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:30.462 [2024-04-17 08:31:03.577634] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:30.462 [2024-04-17 08:31:03.577740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71670 ] 00:34:30.462 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:30.462 Zero copy mechanism will not be used. 00:34:30.462 [2024-04-17 08:31:03.716818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.719 [2024-04-17 08:31:03.822115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.284 08:31:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:31.285 08:31:04 -- common/autotest_common.sh@852 -- # return 0 00:34:31.285 08:31:04 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:34:31.285 08:31:04 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:31.285 08:31:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:31.542 08:31:04 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.542 08:31:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.800 nvme0n1 00:34:31.800 08:31:05 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:31.800 08:31:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:31.800 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:31.800 Zero copy mechanism will not be used. 00:34:31.800 Running I/O for 2 seconds... 
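After each bdevperf pass the script reads accel framework statistics back over the same socket and checks that the crc32c (digest) work actually ran, and ran in the expected module. A condensed sketch of that check, mirroring the jq filter in the trace (the expected module is "software" in this run, where no hardware accel module is loaded):

  # pull crc32c stats from the bperf instance: prints "<module_name> <executed>"
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # pass only if at least one crc32c op executed and it ran in the expected module
  (( acc_executed > 0 )) && [[ "$acc_module" == "software" ]]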
00:34:34.332 00:34:34.332 Latency(us) 00:34:34.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:34.332 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:34.332 nvme0n1 : 2.00 7837.10 979.64 0.00 0.00 2038.68 1760.03 10130.89 00:34:34.332 =================================================================================================================== 00:34:34.332 Total : 7837.10 979.64 0.00 0.00 2038.68 1760.03 10130.89 00:34:34.332 0 00:34:34.332 08:31:07 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:34.332 08:31:07 -- host/digest.sh@92 -- # get_accel_stats 00:34:34.332 08:31:07 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:34.332 | select(.opcode=="crc32c") 00:34:34.332 | "\(.module_name) \(.executed)"' 00:34:34.332 08:31:07 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:34.332 08:31:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:34.332 08:31:07 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:34:34.332 08:31:07 -- host/digest.sh@93 -- # exp_module=software 00:34:34.332 08:31:07 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:34.332 08:31:07 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:34.332 08:31:07 -- host/digest.sh@97 -- # killprocess 71670 00:34:34.332 08:31:07 -- common/autotest_common.sh@926 -- # '[' -z 71670 ']' 00:34:34.332 08:31:07 -- common/autotest_common.sh@930 -- # kill -0 71670 00:34:34.332 08:31:07 -- common/autotest_common.sh@931 -- # uname 00:34:34.332 08:31:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:34.332 08:31:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71670 00:34:34.332 08:31:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:34.332 08:31:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:34.332 08:31:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71670' 00:34:34.332 killing process with pid 71670 00:34:34.332 08:31:07 -- common/autotest_common.sh@945 -- # kill 71670 00:34:34.332 Received shutdown signal, test time was about 2.000000 seconds 00:34:34.332 00:34:34.333 Latency(us) 00:34:34.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:34.333 =================================================================================================================== 00:34:34.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:34.333 08:31:07 -- common/autotest_common.sh@950 -- # wait 71670 00:34:34.333 08:31:07 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:34:34.333 08:31:07 -- host/digest.sh@77 -- # local rw bs qd 00:34:34.333 08:31:07 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:34.333 08:31:07 -- host/digest.sh@80 -- # rw=randwrite 00:34:34.333 08:31:07 -- host/digest.sh@80 -- # bs=4096 00:34:34.333 08:31:07 -- host/digest.sh@80 -- # qd=128 00:34:34.333 08:31:07 -- host/digest.sh@82 -- # bperfpid=71730 00:34:34.333 08:31:07 -- host/digest.sh@83 -- # waitforlisten 71730 /var/tmp/bperf.sock 00:34:34.333 08:31:07 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:34.333 08:31:07 -- common/autotest_common.sh@819 -- # '[' -z 71730 ']' 00:34:34.333 08:31:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:34.333 08:31:07 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:34:34.333 08:31:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:34.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:34.333 08:31:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:34.333 08:31:07 -- common/autotest_common.sh@10 -- # set +x 00:34:34.591 [2024-04-17 08:31:07.685923] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:34.591 [2024-04-17 08:31:07.686002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71730 ] 00:34:34.591 [2024-04-17 08:31:07.824437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.849 [2024-04-17 08:31:07.930452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.417 08:31:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:35.417 08:31:08 -- common/autotest_common.sh@852 -- # return 0 00:34:35.417 08:31:08 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:34:35.417 08:31:08 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:35.417 08:31:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:35.676 08:31:08 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.676 08:31:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.935 nvme0n1 00:34:35.935 08:31:09 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:35.935 08:31:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:36.193 Running I/O for 2 seconds... 
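The MiB/s column in the bdevperf tables is simply IOPS multiplied by the I/O size, so the two read runs already printed above are easy to sanity-check (figures taken from those tables, rounding aside):

  # 4 KiB randread, QD 128:  15520.74 IOPS * 4096 B   -> ~60.63 MiB/s
  # 128 KiB randread, QD 16:  7837.10 IOPS * 131072 B -> ~979.64 MiB/s
  echo '15520.74 * 4096 / 1048576'  | bc -l
  echo '7837.10 * 131072 / 1048576' | bc -l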
00:34:38.203 00:34:38.203 Latency(us) 00:34:38.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.203 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:38.203 nvme0n1 : 2.00 17375.42 67.87 0.00 0.00 7361.02 6181.56 16140.74 00:34:38.203 =================================================================================================================== 00:34:38.203 Total : 17375.42 67.87 0.00 0.00 7361.02 6181.56 16140.74 00:34:38.203 0 00:34:38.203 08:31:11 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:38.203 08:31:11 -- host/digest.sh@92 -- # get_accel_stats 00:34:38.203 08:31:11 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:38.203 | select(.opcode=="crc32c") 00:34:38.203 | "\(.module_name) \(.executed)"' 00:34:38.203 08:31:11 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:38.203 08:31:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:38.462 08:31:11 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:34:38.462 08:31:11 -- host/digest.sh@93 -- # exp_module=software 00:34:38.462 08:31:11 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:38.462 08:31:11 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:38.462 08:31:11 -- host/digest.sh@97 -- # killprocess 71730 00:34:38.462 08:31:11 -- common/autotest_common.sh@926 -- # '[' -z 71730 ']' 00:34:38.462 08:31:11 -- common/autotest_common.sh@930 -- # kill -0 71730 00:34:38.462 08:31:11 -- common/autotest_common.sh@931 -- # uname 00:34:38.462 08:31:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:38.462 08:31:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71730 00:34:38.462 killing process with pid 71730 00:34:38.462 Received shutdown signal, test time was about 2.000000 seconds 00:34:38.462 00:34:38.462 Latency(us) 00:34:38.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.462 =================================================================================================================== 00:34:38.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:38.462 08:31:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:38.462 08:31:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:38.462 08:31:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71730' 00:34:38.462 08:31:11 -- common/autotest_common.sh@945 -- # kill 71730 00:34:38.462 08:31:11 -- common/autotest_common.sh@950 -- # wait 71730 00:34:38.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:38.721 08:31:11 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:34:38.721 08:31:11 -- host/digest.sh@77 -- # local rw bs qd 00:34:38.721 08:31:11 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:38.721 08:31:11 -- host/digest.sh@80 -- # rw=randwrite 00:34:38.722 08:31:11 -- host/digest.sh@80 -- # bs=131072 00:34:38.722 08:31:11 -- host/digest.sh@80 -- # qd=16 00:34:38.722 08:31:11 -- host/digest.sh@82 -- # bperfpid=71790 00:34:38.722 08:31:11 -- host/digest.sh@83 -- # waitforlisten 71790 /var/tmp/bperf.sock 00:34:38.722 08:31:11 -- common/autotest_common.sh@819 -- # '[' -z 71790 ']' 00:34:38.722 08:31:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:38.722 08:31:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:38.722 08:31:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:38.722 08:31:11 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:38.722 08:31:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:38.722 08:31:11 -- common/autotest_common.sh@10 -- # set +x 00:34:38.722 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:38.722 Zero copy mechanism will not be used. 00:34:38.722 [2024-04-17 08:31:11.876829] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:38.722 [2024-04-17 08:31:11.877052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71790 ] 00:34:38.722 [2024-04-17 08:31:12.003708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.980 [2024-04-17 08:31:12.119324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.545 08:31:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:39.545 08:31:12 -- common/autotest_common.sh@852 -- # return 0 00:34:39.545 08:31:12 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:34:39.545 08:31:12 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:39.545 08:31:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:39.803 08:31:13 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.803 08:31:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:40.061 nvme0n1 00:34:40.061 08:31:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:40.061 08:31:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:40.320 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:40.320 Zero copy mechanism will not be used. 00:34:40.320 Running I/O for 2 seconds... 
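The fourth combination repeats the same flow with 128 KiB random writes at queue depth 16. Note bdevperf's own warning in the trace: because the I/O size (131072) exceeds the 65536-byte zero-copy threshold, the zero-copy mechanism is skipped for this run. Invocation as issued here:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  # bdevperf reports: "I/O size of 131072 is greater than zero copy threshold (65536).
  #                    Zero copy mechanism will not be used."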
00:34:42.224 00:34:42.224 Latency(us) 00:34:42.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.224 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:42.224 nvme0n1 : 2.00 6892.26 861.53 0.00 0.00 2316.81 1509.62 9272.34 00:34:42.224 =================================================================================================================== 00:34:42.224 Total : 6892.26 861.53 0.00 0.00 2316.81 1509.62 9272.34 00:34:42.224 0 00:34:42.224 08:31:15 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:42.224 08:31:15 -- host/digest.sh@92 -- # get_accel_stats 00:34:42.224 08:31:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:42.224 | select(.opcode=="crc32c") 00:34:42.224 | "\(.module_name) \(.executed)"' 00:34:42.224 08:31:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:42.224 08:31:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:42.498 08:31:15 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:34:42.498 08:31:15 -- host/digest.sh@93 -- # exp_module=software 00:34:42.498 08:31:15 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:42.498 08:31:15 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:42.498 08:31:15 -- host/digest.sh@97 -- # killprocess 71790 00:34:42.498 08:31:15 -- common/autotest_common.sh@926 -- # '[' -z 71790 ']' 00:34:42.498 08:31:15 -- common/autotest_common.sh@930 -- # kill -0 71790 00:34:42.498 08:31:15 -- common/autotest_common.sh@931 -- # uname 00:34:42.498 08:31:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:42.498 08:31:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71790 00:34:42.498 killing process with pid 71790 00:34:42.498 Received shutdown signal, test time was about 2.000000 seconds 00:34:42.498 00:34:42.498 Latency(us) 00:34:42.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.498 =================================================================================================================== 00:34:42.498 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.498 08:31:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:42.498 08:31:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:42.498 08:31:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71790' 00:34:42.498 08:31:15 -- common/autotest_common.sh@945 -- # kill 71790 00:34:42.498 08:31:15 -- common/autotest_common.sh@950 -- # wait 71790 00:34:42.762 08:31:15 -- host/digest.sh@126 -- # killprocess 71583 00:34:42.762 08:31:15 -- common/autotest_common.sh@926 -- # '[' -z 71583 ']' 00:34:42.762 08:31:15 -- common/autotest_common.sh@930 -- # kill -0 71583 00:34:42.762 08:31:15 -- common/autotest_common.sh@931 -- # uname 00:34:42.762 08:31:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:42.762 08:31:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71583 00:34:42.762 killing process with pid 71583 00:34:42.762 08:31:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:42.762 08:31:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:42.762 08:31:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71583' 00:34:42.762 08:31:15 -- common/autotest_common.sh@945 -- # kill 71583 00:34:42.762 08:31:15 -- common/autotest_common.sh@950 -- # wait 71583 00:34:43.021 00:34:43.021 real 0m18.194s 00:34:43.021 user 
0m34.757s 00:34:43.021 sys 0m4.544s 00:34:43.021 08:31:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:43.021 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:34:43.021 ************************************ 00:34:43.021 END TEST nvmf_digest_clean 00:34:43.021 ************************************ 00:34:43.021 08:31:16 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:34:43.021 08:31:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:43.021 08:31:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:43.021 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:34:43.021 ************************************ 00:34:43.021 START TEST nvmf_digest_error 00:34:43.021 ************************************ 00:34:43.021 08:31:16 -- common/autotest_common.sh@1104 -- # run_digest_error 00:34:43.021 08:31:16 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:34:43.021 08:31:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:43.021 08:31:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:43.021 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:34:43.280 08:31:16 -- nvmf/common.sh@469 -- # nvmfpid=71874 00:34:43.280 08:31:16 -- nvmf/common.sh@470 -- # waitforlisten 71874 00:34:43.280 08:31:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:43.280 08:31:16 -- common/autotest_common.sh@819 -- # '[' -z 71874 ']' 00:34:43.280 08:31:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.280 08:31:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:43.280 08:31:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.280 08:31:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:43.280 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:34:43.280 [2024-04-17 08:31:16.421377] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:43.280 [2024-04-17 08:31:16.421469] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.280 [2024-04-17 08:31:16.560535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.538 [2024-04-17 08:31:16.713073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:43.538 [2024-04-17 08:31:16.713225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.538 [2024-04-17 08:31:16.713232] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.538 [2024-04-17 08:31:16.713238] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:43.538 [2024-04-17 08:31:16.713265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.106 08:31:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:44.106 08:31:17 -- common/autotest_common.sh@852 -- # return 0 00:34:44.106 08:31:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:44.106 08:31:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:44.106 08:31:17 -- common/autotest_common.sh@10 -- # set +x 00:34:44.106 08:31:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.106 08:31:17 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:44.106 08:31:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.106 08:31:17 -- common/autotest_common.sh@10 -- # set +x 00:34:44.106 [2024-04-17 08:31:17.380491] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:44.106 08:31:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.106 08:31:17 -- host/digest.sh@104 -- # common_target_config 00:34:44.106 08:31:17 -- host/digest.sh@43 -- # rpc_cmd 00:34:44.106 08:31:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.106 08:31:17 -- common/autotest_common.sh@10 -- # set +x 00:34:44.366 null0 00:34:44.366 [2024-04-17 08:31:17.533968] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.366 [2024-04-17 08:31:17.558039] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:44.366 08:31:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:44.366 08:31:17 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:34:44.366 08:31:17 -- host/digest.sh@54 -- # local rw bs qd 00:34:44.366 08:31:17 -- host/digest.sh@56 -- # rw=randread 00:34:44.366 08:31:17 -- host/digest.sh@56 -- # bs=4096 00:34:44.366 08:31:17 -- host/digest.sh@56 -- # qd=128 00:34:44.366 08:31:17 -- host/digest.sh@58 -- # bperfpid=71906 00:34:44.366 08:31:17 -- host/digest.sh@60 -- # waitforlisten 71906 /var/tmp/bperf.sock 00:34:44.366 08:31:17 -- common/autotest_common.sh@819 -- # '[' -z 71906 ']' 00:34:44.366 08:31:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:44.366 08:31:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:44.366 08:31:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:44.366 08:31:17 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:44.366 08:31:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:44.366 08:31:17 -- common/autotest_common.sh@10 -- # set +x 00:34:44.366 [2024-04-17 08:31:17.615480] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:34:44.366 [2024-04-17 08:31:17.615666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71906 ] 00:34:44.625 [2024-04-17 08:31:17.742982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.625 [2024-04-17 08:31:17.867788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.562 08:31:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:45.562 08:31:18 -- common/autotest_common.sh@852 -- # return 0 00:34:45.562 08:31:18 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:45.562 08:31:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:45.562 08:31:18 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:45.562 08:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.562 08:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.562 08:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.562 08:31:18 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:45.562 08:31:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:45.822 nvme0n1 00:34:45.822 08:31:19 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:45.822 08:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.822 08:31:19 -- common/autotest_common.sh@10 -- # set +x 00:34:46.081 08:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:46.081 08:31:19 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:46.081 08:31:19 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:46.081 Running I/O for 2 seconds... 
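The digest-error test is wired up differently from the clean runs: crc32c on the nvmf target is routed through the error accel module, the bperf bdev layer is told to keep NVMe error statistics and retry failed I/O indefinitely, and corruption is then injected so the host sees data digest failures. A sketch of that sequence using the RPCs from this run (accel_assign_opc and accel_error_inject_error go to the target's default RPC socket, i.e. the rpc_cmd calls above; arguments such as "-t corrupt -i 256" are copied verbatim from the trace):

  # target side: assign the crc32c opcode to the error-injection accel module
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error

  # bperf side: keep NVMe error stats, retry indefinitely, then attach with data digest enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # target side: start corrupting crc32c results, then drive I/O from bperf; the host then
  # logs "data digest error" and TRANSIENT TRANSPORT ERROR completions, as seen below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests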
00:34:46.081 [2024-04-17 08:31:19.306445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.081 [2024-04-17 08:31:19.306652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.081 [2024-04-17 08:31:19.306715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.081 [2024-04-17 08:31:19.322642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.081 [2024-04-17 08:31:19.322814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.081 [2024-04-17 08:31:19.322872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.081 [2024-04-17 08:31:19.339159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.081 [2024-04-17 08:31:19.339322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.081 [2024-04-17 08:31:19.339387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.081 [2024-04-17 08:31:19.355311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.081 [2024-04-17 08:31:19.355449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.081 [2024-04-17 08:31:19.355507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.081 [2024-04-17 08:31:19.371719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.081 [2024-04-17 08:31:19.371895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.081 [2024-04-17 08:31:19.371907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.081 [2024-04-17 08:31:19.387771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.081 [2024-04-17 08:31:19.387827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.081 [2024-04-17 08:31:19.387851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.081 [2024-04-17 08:31:19.403724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.081 [2024-04-17 08:31:19.403783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.081 [2024-04-17 08:31:19.403797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.419792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.419873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.419886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.436028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.436101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.436114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.451823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.451912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.451932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.469196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.469276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.469296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.486042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.486112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.486130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.502339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.502400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.502413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.517443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.517497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.517509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.533136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.533188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.533198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.547980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.548022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.548032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.562027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.562063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.562072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.576524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.576566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.576577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.593180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.593249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.593266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.609774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.609827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.609839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.626148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.626197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 
08:31:19.626208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.642233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.642284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.642295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.658287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.658344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.658356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.343 [2024-04-17 08:31:19.674266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.343 [2024-04-17 08:31:19.674327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.343 [2024-04-17 08:31:19.674339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.690529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.690569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.690581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.706383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.706428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.706440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.722384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.722426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.722437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.738311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.738353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4467 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.738365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.754709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.754756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.754767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.770962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.771008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.771019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.786696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.786742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.786752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.801471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.801515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.801525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.815802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.815841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.815852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.831064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.831181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.831250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.846217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.846334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:22279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.846386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.862444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.862555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.862615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.878123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.878238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.878250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.893734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.893777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.893788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.909012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.909052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.909062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.610 [2024-04-17 08:31:19.924438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.610 [2024-04-17 08:31:19.924479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.610 [2024-04-17 08:31:19.924489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.879 [2024-04-17 08:31:19.940849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.879 [2024-04-17 08:31:19.940891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.879 [2024-04-17 08:31:19.940902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.879 [2024-04-17 08:31:19.969574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.879 [2024-04-17 08:31:19.969613] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.879 [2024-04-17 08:31:19.969623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.879 [2024-04-17 08:31:19.986144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.879 [2024-04-17 08:31:19.986182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.879 [2024-04-17 08:31:19.986192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.879 [2024-04-17 08:31:20.002133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.879 [2024-04-17 08:31:20.002181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.879 [2024-04-17 08:31:20.002192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.879 [2024-04-17 08:31:20.019717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.879 [2024-04-17 08:31:20.019765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.879 [2024-04-17 08:31:20.019776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.879 [2024-04-17 08:31:20.035347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.879 [2024-04-17 08:31:20.035393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.879 [2024-04-17 08:31:20.035404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.879 [2024-04-17 08:31:20.051165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.879 [2024-04-17 08:31:20.051216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.879 [2024-04-17 08:31:20.051227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.879 [2024-04-17 08:31:20.067471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.879 [2024-04-17 08:31:20.067532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.879 [2024-04-17 08:31:20.067544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.879 [2024-04-17 08:31:20.084841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fb7340) 00:34:46.880 [2024-04-17 08:31:20.084927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.880 [2024-04-17 08:31:20.084948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.880 [2024-04-17 08:31:20.103407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.880 [2024-04-17 08:31:20.103488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.880 [2024-04-17 08:31:20.103508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.880 [2024-04-17 08:31:20.120983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.880 [2024-04-17 08:31:20.121074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.880 [2024-04-17 08:31:20.121094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.880 [2024-04-17 08:31:20.139375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.880 [2024-04-17 08:31:20.139444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.880 [2024-04-17 08:31:20.139462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.880 [2024-04-17 08:31:20.156252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.880 [2024-04-17 08:31:20.156341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.880 [2024-04-17 08:31:20.156355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.880 [2024-04-17 08:31:20.173160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.880 [2024-04-17 08:31:20.173233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.880 [2024-04-17 08:31:20.173246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.880 [2024-04-17 08:31:20.189985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.880 [2024-04-17 08:31:20.190064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.880 [2024-04-17 08:31:20.190077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:46.880 [2024-04-17 08:31:20.207018] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:46.880 [2024-04-17 08:31:20.207093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:46.880 [2024-04-17 08:31:20.207107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.139 [2024-04-17 08:31:20.223940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.139 [2024-04-17 08:31:20.224011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.139 [2024-04-17 08:31:20.224024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.139 [2024-04-17 08:31:20.240621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.139 [2024-04-17 08:31:20.240690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.240705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.257563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.257636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.257649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.274291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.274388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.274402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.291015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.291093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.291106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.307670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.307744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.307756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:47.140 [2024-04-17 08:31:20.324573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.324648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.324660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.348904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.348974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.348987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.365737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.365798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.365810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.382496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.382570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.382582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.399259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.399344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.399357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.416230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.416337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.416354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.433669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.433753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.433768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.450790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.450874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.450888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.140 [2024-04-17 08:31:20.467944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.140 [2024-04-17 08:31:20.468020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.140 [2024-04-17 08:31:20.468034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.400 [2024-04-17 08:31:20.484800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.400 [2024-04-17 08:31:20.484874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.400 [2024-04-17 08:31:20.484887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.400 [2024-04-17 08:31:20.501488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.400 [2024-04-17 08:31:20.501577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.400 [2024-04-17 08:31:20.501595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.400 [2024-04-17 08:31:20.518033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.400 [2024-04-17 08:31:20.518099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.400 [2024-04-17 08:31:20.518111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.400 [2024-04-17 08:31:20.534515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.400 [2024-04-17 08:31:20.534572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.400 [2024-04-17 08:31:20.534610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.400 [2024-04-17 08:31:20.550343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.400 [2024-04-17 08:31:20.550399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.400 [2024-04-17 08:31:20.550410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.400 [2024-04-17 08:31:20.566644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.400 [2024-04-17 08:31:20.566699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.400 [2024-04-17 08:31:20.566711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.400 [2024-04-17 08:31:20.582850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.400 [2024-04-17 08:31:20.582901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.400 [2024-04-17 08:31:20.582912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.401 [2024-04-17 08:31:20.599679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.401 [2024-04-17 08:31:20.599742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.401 [2024-04-17 08:31:20.599754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.401 [2024-04-17 08:31:20.616316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.401 [2024-04-17 08:31:20.616377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.401 [2024-04-17 08:31:20.616391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.401 [2024-04-17 08:31:20.632806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.401 [2024-04-17 08:31:20.632874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.401 [2024-04-17 08:31:20.632886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.401 [2024-04-17 08:31:20.649494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.401 [2024-04-17 08:31:20.649582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.401 [2024-04-17 08:31:20.649604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.401 [2024-04-17 08:31:20.666209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.401 [2024-04-17 08:31:20.666271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:47.401 [2024-04-17 08:31:20.666288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.401 [2024-04-17 08:31:20.682776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.401 [2024-04-17 08:31:20.682840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.401 [2024-04-17 08:31:20.682853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.401 [2024-04-17 08:31:20.699437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.401 [2024-04-17 08:31:20.699504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.401 [2024-04-17 08:31:20.699517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.401 [2024-04-17 08:31:20.716571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.401 [2024-04-17 08:31:20.716630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.401 [2024-04-17 08:31:20.716643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.660 [2024-04-17 08:31:20.733086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.660 [2024-04-17 08:31:20.733151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.660 [2024-04-17 08:31:20.733165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.660 [2024-04-17 08:31:20.749111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.660 [2024-04-17 08:31:20.749167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.660 [2024-04-17 08:31:20.749179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.660 [2024-04-17 08:31:20.766879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.660 [2024-04-17 08:31:20.766941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.660 [2024-04-17 08:31:20.766955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.660 [2024-04-17 08:31:20.783644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.660 [2024-04-17 08:31:20.783708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11503 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.660 [2024-04-17 08:31:20.783720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.660 [2024-04-17 08:31:20.799469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.660 [2024-04-17 08:31:20.799530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.660 [2024-04-17 08:31:20.799543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.815432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.815490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.815502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.830919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.830974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.830985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.845836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.845892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.845908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.860898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.860947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.860957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.876039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.876087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.876098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.891890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.891939] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.891950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.908236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.908294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.908324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.925844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.925915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.925934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.943532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.943611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.943630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.961522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.961611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.961629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.661 [2024-04-17 08:31:20.979475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.661 [2024-04-17 08:31:20.979548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.661 [2024-04-17 08:31:20.979566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.920 [2024-04-17 08:31:20.995217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.920 [2024-04-17 08:31:20.995273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.920 [2024-04-17 08:31:20.995285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.920 [2024-04-17 08:31:21.011036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.920 [2024-04-17 
08:31:21.011100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.920 [2024-04-17 08:31:21.011112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.920 [2024-04-17 08:31:21.026274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.920 [2024-04-17 08:31:21.026337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.920 [2024-04-17 08:31:21.026347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.920 [2024-04-17 08:31:21.041950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.920 [2024-04-17 08:31:21.041997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.920 [2024-04-17 08:31:21.042009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.920 [2024-04-17 08:31:21.058150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.920 [2024-04-17 08:31:21.058235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.920 [2024-04-17 08:31:21.058248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.920 [2024-04-17 08:31:21.074656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.920 [2024-04-17 08:31:21.074741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.920 [2024-04-17 08:31:21.074753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.091529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.921 [2024-04-17 08:31:21.091593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.921 [2024-04-17 08:31:21.091606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.109371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.921 [2024-04-17 08:31:21.109459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.921 [2024-04-17 08:31:21.109478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.126532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fb7340) 00:34:47.921 [2024-04-17 08:31:21.126617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.921 [2024-04-17 08:31:21.126636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.142885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.921 [2024-04-17 08:31:21.142942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.921 [2024-04-17 08:31:21.142954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.158529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.921 [2024-04-17 08:31:21.158576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.921 [2024-04-17 08:31:21.158596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.174828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.921 [2024-04-17 08:31:21.174874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.921 [2024-04-17 08:31:21.174885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.190393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.921 [2024-04-17 08:31:21.190433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.921 [2024-04-17 08:31:21.190444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.206018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.921 [2024-04-17 08:31:21.206059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.921 [2024-04-17 08:31:21.206069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.221643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340) 00:34:47.921 [2024-04-17 08:31:21.221687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.921 [2024-04-17 08:31:21.221699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.921 [2024-04-17 08:31:21.237567] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340)
00:34:47.921 [2024-04-17 08:31:21.237611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.921 [2024-04-17 08:31:21.237622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.180 [2024-04-17 08:31:21.253233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340)
00:34:48.180 [2024-04-17 08:31:21.253284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.180 [2024-04-17 08:31:21.253296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.180 [2024-04-17 08:31:21.268713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7340)
00:34:48.180 [2024-04-17 08:31:21.268755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.181 [2024-04-17 08:31:21.268765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.181
00:34:48.181 Latency(us)
00:34:48.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:48.181 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:34:48.181 nvme0n1 : 2.01 15327.43 59.87 0.00 0.00 8345.49 6782.55 36631.48
00:34:48.181 ===================================================================================================================
00:34:48.181 Total : 15327.43 59.87 0.00 0.00 8345.49 6782.55 36631.48
00:34:48.181 0
00:34:48.181 08:31:21 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:48.181 08:31:21 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:48.181 08:31:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:48.181 08:31:21 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:48.181 | .driver_specific
00:34:48.181 | .nvme_error
00:34:48.181 | .status_code
00:34:48.181 | .command_transient_transport_error'
00:34:48.440 08:31:21 -- host/digest.sh@71 -- # (( 120 > 0 ))
00:34:48.440 08:31:21 -- host/digest.sh@73 -- # killprocess 71906
00:34:48.440 08:31:21 -- common/autotest_common.sh@926 -- # '[' -z 71906 ']'
00:34:48.440 08:31:21 -- common/autotest_common.sh@930 -- # kill -0 71906
00:34:48.440 08:31:21 -- common/autotest_common.sh@931 -- # uname
00:34:48.440 08:31:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:34:48.440 08:31:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71906
00:34:48.440 killing process with pid 71906
Received shutdown signal, test time was about 2.000000 seconds
00:34:48.440
00:34:48.440 Latency(us)
00:34:48.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:48.440 ===================================================================================================================
00:34:48.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:48.440 08:31:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1
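The get_transient_errcount helper traced above is the pass/fail criterion of this test: it asks the bdevperf instance for per-bdev I/O statistics over the bperf RPC socket and extracts how many completions carried the COMMAND TRANSIENT TRANSPORT ERROR status produced by the injected CRC32C corruption (120 in this run). Below is a minimal stand-alone sketch of the same check, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and a bdevperf instance already listening on /var/tmp/bperf.sock; the variable names are illustrative only.

    #!/usr/bin/env bash
    # Sketch: count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR
    # for bdev nvme0n1, the same query host/digest.sh issues above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # The test passes when at least one transient transport error was observed.
    (( errcount > 0 )) && echo "transient transport errors seen: $errcount"

Note that these error counters are only populated because the controller was set up with --nvme-error-stat enabled (see the bdev_nvme_set_options call in the trace below).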
00:34:48.440 08:31:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:34:48.440 08:31:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71906'
00:34:48.440 08:31:21 -- common/autotest_common.sh@945 -- # kill 71906
00:34:48.440 08:31:21 -- common/autotest_common.sh@950 -- # wait 71906
00:34:48.700 08:31:21 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:34:48.700 08:31:21 -- host/digest.sh@54 -- # local rw bs qd
00:34:48.700 08:31:21 -- host/digest.sh@56 -- # rw=randread
00:34:48.700 08:31:21 -- host/digest.sh@56 -- # bs=131072
00:34:48.700 08:31:21 -- host/digest.sh@56 -- # qd=16
00:34:48.700 08:31:21 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:34:48.700 08:31:21 -- host/digest.sh@58 -- # bperfpid=71967
00:34:48.700 08:31:21 -- host/digest.sh@60 -- # waitforlisten 71967 /var/tmp/bperf.sock
00:34:48.700 08:31:21 -- common/autotest_common.sh@819 -- # '[' -z 71967 ']'
00:34:48.700 08:31:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:48.700 08:31:21 -- common/autotest_common.sh@824 -- # local max_retries=100
00:34:48.700 08:31:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:48.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:48.700 08:31:21 -- common/autotest_common.sh@828 -- # xtrace_disable
00:34:48.700 08:31:21 -- common/autotest_common.sh@10 -- # set +x
00:34:48.700 [2024-04-17 08:31:21.880829] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:34:48.700 [2024-04-17 08:31:21.881444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71967 ]
00:34:48.700 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:48.700 Zero copy mechanism will not be used.
00:34:48.700 [2024-04-17 08:31:22.006922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:48.959 [2024-04-17 08:31:22.128945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:49.527 08:31:22 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:34:49.527 08:31:22 -- common/autotest_common.sh@852 -- # return 0
00:34:49.527 08:31:22 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:49.527 08:31:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:49.786 08:31:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:49.786 08:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:34:49.786 08:31:22 -- common/autotest_common.sh@10 -- # set +x
00:34:49.786 08:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:49.786 08:31:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:49.786 08:31:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:50.044 nvme0n1
00:34:50.044 08:31:23 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:50.044 08:31:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:34:50.044 08:31:23 -- common/autotest_common.sh@10 -- # set +x
00:34:50.044 08:31:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:34:50.044 08:31:23 -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:50.044 08:31:23 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:50.044 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:50.044 Zero copy mechanism will not be used.
00:34:50.044 Running I/O for 2 seconds...
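With the 4 KiB randread pass finished, host/digest.sh repeats the experiment with 128 KiB reads at queue depth 16 (run_bperf_err randread 131072 16). The setup traced above is the interesting part: the NVMe-oF TCP controller is attached with data digest enabled (--ddgst), CRC32C error injection is switched from disable to corrupt with -i 32 so that a portion of the receive-path digest calculations fail, and only then is I/O started, which is why the completions below again report COMMAND TRANSIENT TRANSPORT ERROR. Below is a minimal sketch of that RPC sequence against an already running bdevperf -z instance, assuming the same paths and target address as the log; it is a reconstruction for readability, not a replacement for the script.

    #!/usr/bin/env bash
    # Sketch: the RPC sequence host/digest.sh issues before starting I/O.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bperf.sock
    rpc="$spdk/scripts/rpc.py -s $sock"

    # Record per-command NVMe error statistics; the -1 retry count keeps
    # failed I/O being retried instead of surfacing as bdev errors.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Leave CRC32C error injection off while the controller attaches.
    $rpc accel_error_inject_error -o crc32c -t disable

    # Attach the NVMe-oF TCP controller with data digest enabled; --ddgst is
    # what makes the injected CRC32C failures visible as digest errors.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Enable CRC32C corruption injection (-t corrupt -i 32) so that some of
    # the reads fail the data digest check.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the queued bdevperf job (randread, 128 KiB, queue depth 16).
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests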
00:34:50.044 [2024-04-17 08:31:23.348172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.044 [2024-04-17 08:31:23.348226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.044 [2024-04-17 08:31:23.348239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.044 [2024-04-17 08:31:23.352503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.044 [2024-04-17 08:31:23.352543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.044 [2024-04-17 08:31:23.352553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.044 [2024-04-17 08:31:23.356752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.044 [2024-04-17 08:31:23.356789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.044 [2024-04-17 08:31:23.356798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.044 [2024-04-17 08:31:23.361023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.044 [2024-04-17 08:31:23.361061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.044 [2024-04-17 08:31:23.361070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.044 [2024-04-17 08:31:23.365270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.044 [2024-04-17 08:31:23.365322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.045 [2024-04-17 08:31:23.365332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.045 [2024-04-17 08:31:23.369519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.045 [2024-04-17 08:31:23.369555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.045 [2024-04-17 08:31:23.369564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.045 [2024-04-17 08:31:23.373711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.045 [2024-04-17 08:31:23.373745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.045 [2024-04-17 08:31:23.373754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.377937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.377973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.377982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.382138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.382170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.382179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.386462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.386495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.386503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.390719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.390752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.390762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.394940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.394976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.394985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.399162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.399198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.399207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.403351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.403384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.403393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.407537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.407572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.407580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.411808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.411848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.411857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.416108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.416147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.416157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.420410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.420450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.420458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.424803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.424847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.424857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.429149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.429193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.429204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.433421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.433464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 
[2024-04-17 08:31:23.433473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.437673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.437711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.437721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.441936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.441975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.441984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.446263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.446328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.446340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.450604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.450647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.450656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.454886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.454931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.454940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.459176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.459222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.459232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.463459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.463506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.463516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.467704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.467749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.467759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.471997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.472042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.472052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.476380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.476423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.476432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.480624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.480665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.480674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.484960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.307 [2024-04-17 08:31:23.485001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.307 [2024-04-17 08:31:23.485011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.307 [2024-04-17 08:31:23.489285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.489338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.489348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.493570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.493609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.493619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.497860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.497904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.497914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.502156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.502197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.502206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.506428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.506466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.506475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.510700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.510738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.510747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.515001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.515040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.515049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.519332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.519371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.519380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.523585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.523625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.523635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.527927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.527966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.527975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.532175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.532214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.532223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.536431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.536467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.536475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.540614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.540652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.540661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.544824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.544861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.544869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.549023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.549060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.549069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.553239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.553276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.553285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.557485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.557522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.557532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.561719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.561757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.561766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.565958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.565997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.566006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.570212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.570255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.570264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.574488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.574527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.574536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.578777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.578819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.578829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.583035] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.583079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.583089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.587362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.587405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.587414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.591626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.591670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.591679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.595877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.595922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.595932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.600126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.308 [2024-04-17 08:31:23.600167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.308 [2024-04-17 08:31:23.600176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.308 [2024-04-17 08:31:23.604407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.309 [2024-04-17 08:31:23.604449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.309 [2024-04-17 08:31:23.604458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.309 [2024-04-17 08:31:23.608684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.309 [2024-04-17 08:31:23.608730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.309 [2024-04-17 08:31:23.608740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:34:50.309 [2024-04-17 08:31:23.612964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.309 [2024-04-17 08:31:23.613012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.309 [2024-04-17 08:31:23.613022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.309 [2024-04-17 08:31:23.617325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.309 [2024-04-17 08:31:23.617379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.309 [2024-04-17 08:31:23.617390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.309 [2024-04-17 08:31:23.621623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.309 [2024-04-17 08:31:23.621667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.309 [2024-04-17 08:31:23.621677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.309 [2024-04-17 08:31:23.625953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.309 [2024-04-17 08:31:23.625997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.309 [2024-04-17 08:31:23.626008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.309 [2024-04-17 08:31:23.630255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.309 [2024-04-17 08:31:23.630317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.309 [2024-04-17 08:31:23.630327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.309 [2024-04-17 08:31:23.634578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.309 [2024-04-17 08:31:23.634635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.309 [2024-04-17 08:31:23.634645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.638943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.638992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.639002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.643337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.643383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.643392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.647673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.647725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.647735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.652026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.652078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.652088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.656353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.656402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.656411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.660698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.660746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.660756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.665030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.665077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.665087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.669366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.669411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.669421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.673682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.673730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.673740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.677927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.677972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.677981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.682151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.682191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.682200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.686421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.686463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.686472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.690658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.690701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.690710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.694962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.695003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.695013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.699268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.699324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:50.578 [2024-04-17 08:31:23.699335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.703563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.703604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.578 [2024-04-17 08:31:23.703614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.578 [2024-04-17 08:31:23.707802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.578 [2024-04-17 08:31:23.707846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.707856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.712074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.712115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.712125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.716334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.716370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.716380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.720616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.720658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.720667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.724872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.724913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.724923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.729134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.729174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.729184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.733378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.733418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.733427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.737645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.737697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.737707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.741992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.742039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.742049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.746264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.746325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.746335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.750602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.750653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.750662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.754868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.754912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.754922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.759145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.759193] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.759203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.763414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.763454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.763463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.767673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.767718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.767727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.771978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.772025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.772034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.776381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.776427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.776437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.780778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.780824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.780833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.785032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.785077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.785086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.789316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 
08:31:23.789360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.789369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.793487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.793529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.793537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.797683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.797727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.797737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.801895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.801943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.801953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.806281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.806339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.806350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.810691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.810742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.810751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.815027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.815079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.815090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.819472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6d1b70) 00:34:50.579 [2024-04-17 08:31:23.819528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.579 [2024-04-17 08:31:23.819539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.579 [2024-04-17 08:31:23.823808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.823860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.823870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.828092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.828142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.828152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.832412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.832461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.832471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.836746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.836795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.836805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.841187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.841234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.841244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.845513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.845559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.845568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.849815] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.849862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.849871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.854149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.854192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.854202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.858793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.858860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.858876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.863192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.863245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.863256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.867555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.867605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.867615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.871899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.871948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.871958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.876205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.876250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.876259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:50.580 [2024-04-17 08:31:23.880488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.880528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.880538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.884794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.884841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.884850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.889056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.889097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.889107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.893326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.893366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.893377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.897519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.897563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.897572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.901847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.901893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.901902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.580 [2024-04-17 08:31:23.906118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.580 [2024-04-17 08:31:23.906162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.580 [2024-04-17 08:31:23.906171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.910410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.910454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.910463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.914764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.914817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.914826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.919054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.919106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.919114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.923454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.923503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.923512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.927723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.927770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.927779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.932049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.932101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.932111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.936376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.936421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.936431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.940658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.940702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.940710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.945107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.945153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.945163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.949408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.949455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.949465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.953772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.953816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.953825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.958117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.958161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.958171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.962617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.962681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.844 [2024-04-17 08:31:23.962695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.967117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.967169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:50.844 [2024-04-17 08:31:23.967179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.844 [2024-04-17 08:31:23.971491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.844 [2024-04-17 08:31:23.971545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:23.971556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:23.975827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:23.975874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:23.975884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:23.980136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:23.980186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:23.980196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:23.984501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:23.984553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:23.984563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:23.988806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:23.988854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:23.988864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:23.993147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:23.993193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:23.993203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:23.997444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:23.997487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:23.997499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.001708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.001751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.001761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.006037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.006082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.006093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.010324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.010364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.010373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.014520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.014561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.014569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.018745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.018785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.018794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.023008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.023051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.023061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.027296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.027350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.027360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.031573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.031619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.031629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.035907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.035954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.035963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.040315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.040366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.040376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.044666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.044716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.044726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.049062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.049108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.049117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.053411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.053458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.053468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.057689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 
00:34:50.845 [2024-04-17 08:31:24.057737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.057747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.061834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.061880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.061889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.066115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.066157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.066167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.070360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.070398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.070407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.074699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.074734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.074743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.078958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.079000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.079009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.083206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.083251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.845 [2024-04-17 08:31:24.083261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.845 [2024-04-17 08:31:24.087459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.845 [2024-04-17 08:31:24.087501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.087510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.091737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.091782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.091792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.096074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.096120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.096129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.100404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.100450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.100460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.104779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.104828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.104837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.109148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.109197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.109207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.113468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.113517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.113526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.117760] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.117809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.117819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.122111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.122160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.122169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.126500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.126548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.126558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.130872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.130922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.130932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.135374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.135427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.135437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.139680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.139733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.139743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.143933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.143985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.143995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:34:50.846 [2024-04-17 08:31:24.148412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.148461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.148471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.152877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.152928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.152938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.157269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.157331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.157342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.161690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.161741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.161751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.166150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.166204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.166214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.170498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.170549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.170559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-04-17 08:31:24.174788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:50.846 [2024-04-17 08:31:24.174839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-04-17 08:31:24.174848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.105 [2024-04-17 08:31:24.179102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.105 [2024-04-17 08:31:24.179150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.179159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.183498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.183543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.183552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.188020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.188076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.188087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.192422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.192467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.192477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.196733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.196779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.196789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.200978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.201021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.201030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.205307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.205362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.205372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.209688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.209742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.209751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.213994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.214035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.214044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.218417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.218459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.218469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.222775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.222817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.222826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.227130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.227177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.227186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.231591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.231643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.231653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.235905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.235957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.106 [2024-04-17 08:31:24.235968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.240202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.240247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.240256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.244414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.244454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.244463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.248813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.248860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.248870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.253063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.253105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.253114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.257466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.257507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.257517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.261822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.261867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.261876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.266237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.266284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.266294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.270585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.270646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.270656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.275022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.275075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.275084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.279372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.279420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.279430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.283719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.283773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.283784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.288154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.288206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.288217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.292490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.292539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.292549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.296778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.296824] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.296833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.301056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.301115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.301124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.305390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.305439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.305449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.309764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.309813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.309823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.314119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.314170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.314179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.318461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.318508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.318518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.322752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.322795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.322804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.327109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.327156] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.327165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.331397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.331442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.331452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.335638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.335684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.335693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.339962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.340008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.340018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.344293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.344346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.344355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.348656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.348703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.348712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.353002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.353050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.353060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.357323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.357367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.357377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.361753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.361808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.361818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-04-17 08:31:24.366116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.106 [2024-04-17 08:31:24.366163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-04-17 08:31:24.366173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.370501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.370548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.370559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.374818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.374866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.374876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.379268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.379331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.379342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.383651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.383702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.383712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.388019] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.388071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.388082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.392446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.392499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.392509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.396813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.396862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.396871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.401200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.401251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.401262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.405513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.405559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.405568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.409808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.409852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.409862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.414139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.414181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.414191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:51.107 [2024-04-17 08:31:24.418420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.418462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.418473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.422783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.422826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.422836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.427142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.427191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.427202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.431470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.431516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.431526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.107 [2024-04-17 08:31:24.435733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.107 [2024-04-17 08:31:24.435776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.107 [2024-04-17 08:31:24.435786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.367 [2024-04-17 08:31:24.440027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.367 [2024-04-17 08:31:24.440069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.367 [2024-04-17 08:31:24.440079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.367 [2024-04-17 08:31:24.444393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.367 [2024-04-17 08:31:24.444433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.367 [2024-04-17 08:31:24.444442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.367 [2024-04-17 08:31:24.448653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.367 [2024-04-17 08:31:24.448692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.367 [2024-04-17 08:31:24.448701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.367 [2024-04-17 08:31:24.452915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.367 [2024-04-17 08:31:24.452955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.367 [2024-04-17 08:31:24.452965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.367 [2024-04-17 08:31:24.457236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.367 [2024-04-17 08:31:24.457279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.367 [2024-04-17 08:31:24.457288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.367 [2024-04-17 08:31:24.461583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.367 [2024-04-17 08:31:24.461625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.367 [2024-04-17 08:31:24.461634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.367 [2024-04-17 08:31:24.465896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.367 [2024-04-17 08:31:24.465934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.367 [2024-04-17 08:31:24.465944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.367 [2024-04-17 08:31:24.470223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.367 [2024-04-17 08:31:24.470271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.367 [2024-04-17 08:31:24.470281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.367 [2024-04-17 08:31:24.474481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.367 [2024-04-17 08:31:24.474522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.367 [2024-04-17 08:31:24.474532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.478735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.478776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.478785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.482997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.483041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.483050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.487321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.487363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.487373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.491642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.491683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.491693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.495864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.495908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.495916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.500126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.500171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.500180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.504438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.504480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.368 [2024-04-17 08:31:24.504489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.508687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.508732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.508741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.513002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.513048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.513057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.517260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.517313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.517323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.521543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.521585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.521594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.525759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.525798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.525807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.530069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.530109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.530119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.534351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.534390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.534400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.538677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.538717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.538727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.543011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.543047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.543057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.547325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.547362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.547372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.551732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.551779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.551789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.556106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.556149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.556159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.560410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.560456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.560466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.564665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.564714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.564723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.568942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.568984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.568995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.573253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.573294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.573315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.577508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.577547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.577557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.581755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.581793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.581802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.585992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.586029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.368 [2024-04-17 08:31:24.586037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.368 [2024-04-17 08:31:24.590208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.368 [2024-04-17 08:31:24.590243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.590252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.594482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 
00:34:51.369 [2024-04-17 08:31:24.594513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.594521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.598713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.598746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.598755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.602979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.603011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.603021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.607197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.607230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.607239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.611423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.611456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.611465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.615645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.615681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.615691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.619874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.619911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.619919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.624180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.624217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.624227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.628533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.628573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.628582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.632802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.632845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.632855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.637080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.637119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.637129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.641396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.641439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.641448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.645716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.645759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.645768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.650043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.650083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.650093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.654387] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.654430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.654439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.658668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.658712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.658721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.662989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.663034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.663044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.667341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.667387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.667396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.671731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.671787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.671798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.676054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.676097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.676107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.680389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.680428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.680437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:34:51.369 [2024-04-17 08:31:24.684666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.684706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.684715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.688961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.689006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.689016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.693233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.693276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.693286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.369 [2024-04-17 08:31:24.697501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.369 [2024-04-17 08:31:24.697541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.369 [2024-04-17 08:31:24.697550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.701777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.701816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.701826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.706081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.706121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.706130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.710358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.710397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.710406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.714689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.714726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.714735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.718931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.718969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.718979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.723188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.723233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.723242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.727409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.727448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.727457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.731613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.731656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.731666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.735836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.735881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.735891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.740117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.740162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.740172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.744400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.744436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.744445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.748644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.748687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.748696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.752917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.752961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.752969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.757285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.757343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.757353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.761667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.761716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.761725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.766060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.766110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.766121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.770425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.770472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.632 [2024-04-17 08:31:24.770481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.774732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.774779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.774789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.779105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.779150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.779159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.783419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.783462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.783471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.787749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.632 [2024-04-17 08:31:24.787797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.632 [2024-04-17 08:31:24.787807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.632 [2024-04-17 08:31:24.792077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.792131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.792141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.796458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.796505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.796514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.800802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.800851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.800861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.805201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.805252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.805261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.809606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.809656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.809665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.813908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.813961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.813971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.818276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.818338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.818349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.822685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.822733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.822742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.827061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.827113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.827123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.831435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.831488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.831498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.835865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.835919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.835929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.840317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.840366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.840375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.844596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.844646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.844655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.848911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.848958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.848967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.853338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.853385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.853395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.857716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.857765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.857775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.862020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 
00:34:51.633 [2024-04-17 08:31:24.862069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.862080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.866350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.866391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.866401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.870631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.870673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.870682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.874971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.875017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.875027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.879319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.879378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.879392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.883619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.883665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.883675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.887865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.887908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.887917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.892140] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.892181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.892190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.896485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.896521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.896530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.900771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.900812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.900822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.905064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.905105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.633 [2024-04-17 08:31:24.905114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.633 [2024-04-17 08:31:24.909395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.633 [2024-04-17 08:31:24.909437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.909447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.913634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.913678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.913688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.917958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.918008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.918017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.922362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.922409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.922418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.926674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.926719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.926729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.930991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.931039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.931048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.935325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.935367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.935376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.939600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.939647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.939656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.943909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.943959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.943968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.948317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.948364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.948375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.952638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.952685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.952695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.956957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.957004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.957014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.634 [2024-04-17 08:31:24.961268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.634 [2024-04-17 08:31:24.961327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.634 [2024-04-17 08:31:24.961338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:24.965624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:24.965672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:24.965681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:24.969978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:24.970024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:24.970034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:24.974776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:24.974839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:24.974851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:24.979296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:24.979366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:24.979376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:24.983712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:24.983769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:24.983780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:24.988130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:24.988189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:24.988201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:24.992493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:24.992546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:24.992557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:24.996819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:24.996869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:24.996879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.001135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.001182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:25.001192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.005450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.005493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:25.005503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.009762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.009809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.895 [2024-04-17 08:31:25.009818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.014063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.014111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:25.014120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.018676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.018730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:25.018740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.023056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.023108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:25.023118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.027593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.027649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:25.027659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.031891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.031941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:25.031951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.036196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.036243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:25.036253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.040522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.895 [2024-04-17 08:31:25.040567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.895 [2024-04-17 08:31:25.040576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.895 [2024-04-17 08:31:25.044884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.044931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.044941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.049215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.049263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.049273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.053552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.053600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.053610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.057962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.058015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.058026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.062350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.062401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.062411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.066678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.066723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.066733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.071009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.071058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.071069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.075354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.075397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.075406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.079647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.079689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.079699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.083921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.083963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.083973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.088274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.088336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.088347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.092592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.092639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.092648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.096856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.096901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.096910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.101221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 
00:34:51.896 [2024-04-17 08:31:25.101267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.101277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.105580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.105633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.105643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.109939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.109990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.109999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.114234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.114288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.114297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.118564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.118621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.118631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.122858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.122912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.122921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.127214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.127258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.127268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.131548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.131591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.131601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.135842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.135886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.135895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.140211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.140259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.140269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.144508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.144551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.144561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.148813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.148857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.148867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.153120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.153168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.153179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.157510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.157555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.896 [2024-04-17 08:31:25.157564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.896 [2024-04-17 08:31:25.161793] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.896 [2024-04-17 08:31:25.161839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.161848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.166137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.166186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.166195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.170630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.170679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.170690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.174958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.175008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.175018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.179385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.179433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.179443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.183690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.183736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.183746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.188059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.188109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.188119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:34:51.897 [2024-04-17 08:31:25.192467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.192510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.192519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.196802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.196847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.196858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.201111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.201159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.201168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.205440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.205482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.205491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.209750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.209793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.209802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.214019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.214060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.214069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.218343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.218381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.218390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.897 [2024-04-17 08:31:25.222609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:51.897 [2024-04-17 08:31:25.222644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.897 [2024-04-17 08:31:25.222653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.226850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.226889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.226898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.231192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.231238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.231247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.235482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.235528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.235537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.239768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.239816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.239825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.244052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.244094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.244105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.248383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.248422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.248432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.252716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.252760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.252769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.256959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.257002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.257012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.261292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.261346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.261355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.265565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.265605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.265614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.269913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.269960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.269969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.274198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.274243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.274254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.278488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.278525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:52.157 [2024-04-17 08:31:25.278534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.282820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.282862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.282871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.287110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.287150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.287159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.291410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.291449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.291459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.295687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.295733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.295742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.299929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.299970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.299979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.304213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.304254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.304264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.308498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.308534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.308542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.312735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.312774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.312783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.316939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.316976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.316986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.321268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.321317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.157 [2024-04-17 08:31:25.321327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.157 [2024-04-17 08:31:25.325579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.157 [2024-04-17 08:31:25.325616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.158 [2024-04-17 08:31:25.325625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:52.158 [2024-04-17 08:31:25.329796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.158 [2024-04-17 08:31:25.329831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.158 [2024-04-17 08:31:25.329840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.158 [2024-04-17 08:31:25.334057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.158 [2024-04-17 08:31:25.334092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.158 [2024-04-17 08:31:25.334101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:52.158 [2024-04-17 08:31:25.338327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6d1b70) 00:34:52.158 [2024-04-17 08:31:25.338362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.158 [2024-04-17 08:31:25.338371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:52.158 00:34:52.158 Latency(us) 00:34:52.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.158 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:52.158 nvme0n1 : 2.00 7163.81 895.48 0.00 0.00 2230.41 1974.67 4693.41 00:34:52.158 =================================================================================================================== 00:34:52.158 Total : 7163.81 895.48 0.00 0.00 2230.41 1974.67 4693.41 00:34:52.158 0 00:34:52.158 08:31:25 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:52.158 08:31:25 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:52.158 08:31:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:52.158 08:31:25 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:52.158 | .driver_specific 00:34:52.158 | .nvme_error 00:34:52.158 | .status_code 00:34:52.158 | .command_transient_transport_error' 00:34:52.417 08:31:25 -- host/digest.sh@71 -- # (( 462 > 0 )) 00:34:52.417 08:31:25 -- host/digest.sh@73 -- # killprocess 71967 00:34:52.417 08:31:25 -- common/autotest_common.sh@926 -- # '[' -z 71967 ']' 00:34:52.417 08:31:25 -- common/autotest_common.sh@930 -- # kill -0 71967 00:34:52.417 08:31:25 -- common/autotest_common.sh@931 -- # uname 00:34:52.417 08:31:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:52.417 08:31:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71967 00:34:52.417 08:31:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:52.417 killing process with pid 71967 00:34:52.417 08:31:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:52.417 08:31:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71967' 00:34:52.417 08:31:25 -- common/autotest_common.sh@945 -- # kill 71967 00:34:52.417 Received shutdown signal, test time was about 2.000000 seconds 00:34:52.417 00:34:52.417 Latency(us) 00:34:52.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.417 =================================================================================================================== 00:34:52.417 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:52.417 08:31:25 -- common/autotest_common.sh@950 -- # wait 71967 00:34:52.675 08:31:25 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:34:52.675 08:31:25 -- host/digest.sh@54 -- # local rw bs qd 00:34:52.675 08:31:25 -- host/digest.sh@56 -- # rw=randwrite 00:34:52.675 08:31:25 -- host/digest.sh@56 -- # bs=4096 00:34:52.675 08:31:25 -- host/digest.sh@56 -- # qd=128 00:34:52.675 08:31:25 -- host/digest.sh@58 -- # bperfpid=72027 00:34:52.675 08:31:25 -- host/digest.sh@60 -- # waitforlisten 72027 /var/tmp/bperf.sock 00:34:52.675 08:31:25 -- common/autotest_common.sh@819 -- # '[' -z 72027 ']' 00:34:52.675 08:31:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:52.675 08:31:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:52.675 08:31:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:52.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:52.675 08:31:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:52.675 08:31:25 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:52.675 08:31:25 -- common/autotest_common.sh@10 -- # set +x 00:34:52.675 [2024-04-17 08:31:25.920474] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:52.675 [2024-04-17 08:31:25.920565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72027 ] 00:34:52.933 [2024-04-17 08:31:26.046343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.933 [2024-04-17 08:31:26.152471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.561 08:31:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:53.561 08:31:26 -- common/autotest_common.sh@852 -- # return 0 00:34:53.561 08:31:26 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:53.561 08:31:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:53.819 08:31:27 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:53.820 08:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:53.820 08:31:27 -- common/autotest_common.sh@10 -- # set +x 00:34:53.820 08:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:53.820 08:31:27 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:53.820 08:31:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:54.078 nvme0n1 00:34:54.078 08:31:27 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:54.078 08:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.078 08:31:27 -- common/autotest_common.sh@10 -- # set +x 00:34:54.078 08:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.078 08:31:27 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:54.078 08:31:27 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:54.337 Running I/O for 2 seconds... 
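The xtrace above boils down to the following shell sketch. The socket path, target address, subsystem NQN, bdev name, and jq filter are the ones already shown in this trace; the initial "accel_error_inject_error -o crc32c -t disable" reset and the common.sh plumbing are omitted, so treat this as a condensed illustration of the flow rather than the exact digest.sh code:

BPERF_SOCK=/var/tmp/bperf.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# Start bdevperf as an RPC-driven initiator; -z makes it idle until perform_tests.
# (The real script records $! as bperfpid and waits for the RPC socket to appear.)
"$BDEVPERF" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

# Retry failed I/O indefinitely and keep per-status-code NVMe error counters.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled...
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...and have the target's accel layer corrupt every 256th crc32c, which is what
# produces the "data digest error" lines in this log. This RPC goes to the target
# application's socket (what rpc_cmd resolves to in the test framework), not bperf.sock.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive I/O for 2 seconds, then count completions that ended in a transient
# transport error and require the count to be non-zero, as digest.sh does above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
errs=$("$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))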
00:34:54.337 [2024-04-17 08:31:27.526450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ddc00 00:34:54.337 [2024-04-17 08:31:27.527798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.527837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.337 [2024-04-17 08:31:27.541945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fef90 00:34:54.337 [2024-04-17 08:31:27.543264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.543318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.337 [2024-04-17 08:31:27.557442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ff3c8 00:34:54.337 [2024-04-17 08:31:27.558761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.558799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:54.337 [2024-04-17 08:31:27.572984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190feb58 00:34:54.337 [2024-04-17 08:31:27.574279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.574330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:54.337 [2024-04-17 08:31:27.588491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fe720 00:34:54.337 [2024-04-17 08:31:27.589776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.589817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:54.337 [2024-04-17 08:31:27.604070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fe2e8 00:34:54.337 [2024-04-17 08:31:27.605351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.605388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:54.337 [2024-04-17 08:31:27.619622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fdeb0 00:34:54.337 [2024-04-17 08:31:27.620876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.620914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:34:54.337 [2024-04-17 08:31:27.635054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fda78 00:34:54.337 [2024-04-17 08:31:27.636318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.636355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:54.337 [2024-04-17 08:31:27.650485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fd640 00:34:54.337 [2024-04-17 08:31:27.651732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.651769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:54.337 [2024-04-17 08:31:27.666039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fd208 00:34:54.337 [2024-04-17 08:31:27.667290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.337 [2024-04-17 08:31:27.667343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.681601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fcdd0 00:34:54.596 [2024-04-17 08:31:27.682835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.682874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.697146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fc998 00:34:54.596 [2024-04-17 08:31:27.698365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.698400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.712758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fc560 00:34:54.596 [2024-04-17 08:31:27.713961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.714002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.729362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fc128 00:34:54.596 [2024-04-17 08:31:27.730571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.730614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.744874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fbcf0 00:34:54.596 [2024-04-17 08:31:27.746071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.746108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.760430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fb8b8 00:34:54.596 [2024-04-17 08:31:27.761608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.761645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.776004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fb480 00:34:54.596 [2024-04-17 08:31:27.777173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.777209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.791830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fb048 00:34:54.596 [2024-04-17 08:31:27.793022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.793061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.807622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fac10 00:34:54.596 [2024-04-17 08:31:27.808779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.808817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.823240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fa7d8 00:34:54.596 [2024-04-17 08:31:27.824390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.824430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.838822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190fa3a0 00:34:54.596 [2024-04-17 08:31:27.839963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.840000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.854375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f9f68 00:34:54.596 [2024-04-17 08:31:27.855508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.855543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.869966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f9b30 00:34:54.596 [2024-04-17 08:31:27.871095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.871137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.885684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f96f8 00:34:54.596 [2024-04-17 08:31:27.886814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.886856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.901344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f92c0 00:34:54.596 [2024-04-17 08:31:27.902437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.902473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:54.596 [2024-04-17 08:31:27.916875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f8e88 00:34:54.596 [2024-04-17 08:31:27.917958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.596 [2024-04-17 08:31:27.917995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:27.932342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f8a50 00:34:54.855 [2024-04-17 08:31:27.933409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:27.933449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:27.947967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f8618 00:34:54.855 [2024-04-17 08:31:27.949047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:27.949087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:27.963626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f81e0 00:34:54.855 [2024-04-17 08:31:27.964684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:27.964720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:27.979315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f7da8 00:34:54.855 [2024-04-17 08:31:27.980362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:27.980400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:27.994896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f7970 00:34:54.855 [2024-04-17 08:31:27.995924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:27.995959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.010406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f7538 00:34:54.855 [2024-04-17 08:31:28.011445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.011487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.027003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f7100 00:34:54.855 [2024-04-17 08:31:28.028017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.028060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.042639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f6cc8 00:34:54.855 [2024-04-17 08:31:28.043643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.043686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.058159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f6890 00:34:54.855 [2024-04-17 08:31:28.059162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.059199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.073730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f6458 00:34:54.855 [2024-04-17 08:31:28.074725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.074766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.089255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f6020 00:34:54.855 [2024-04-17 08:31:28.090221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.090258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.104883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f5be8 00:34:54.855 [2024-04-17 08:31:28.105854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.105891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.120438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f57b0 00:34:54.855 [2024-04-17 08:31:28.121387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.121423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.136031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f5378 00:34:54.855 [2024-04-17 08:31:28.136974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.137009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.151543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f4f40 00:34:54.855 [2024-04-17 08:31:28.152475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.152511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.167205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f4b08 00:34:54.855 [2024-04-17 08:31:28.168133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.168171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:54.855 [2024-04-17 08:31:28.182791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f46d0 00:34:54.855 [2024-04-17 08:31:28.183715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.855 [2024-04-17 08:31:28.183752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.198346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f4298 00:34:55.114 [2024-04-17 08:31:28.199259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.199297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.214009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f3e60 00:34:55.114 [2024-04-17 08:31:28.214941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.214984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.229766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f3a28 00:34:55.114 [2024-04-17 08:31:28.230674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.230716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.245430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f35f0 00:34:55.114 [2024-04-17 08:31:28.246297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.246342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.261016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f31b8 00:34:55.114 [2024-04-17 08:31:28.261894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.261933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.276561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f2d80 00:34:55.114 [2024-04-17 08:31:28.277420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.277458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.292188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f2948 00:34:55.114 [2024-04-17 08:31:28.293044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.293081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.307799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f2510 00:34:55.114 [2024-04-17 08:31:28.308634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.308670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.323386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f20d8 00:34:55.114 [2024-04-17 08:31:28.324210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.324251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.339034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f1ca0 00:34:55.114 [2024-04-17 08:31:28.339862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.339899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.354774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f1868 00:34:55.114 [2024-04-17 08:31:28.355601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.355640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.370528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f1430 00:34:55.114 [2024-04-17 08:31:28.371345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.371382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.386301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f0ff8 00:34:55.114 [2024-04-17 08:31:28.387112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 
08:31:28.387152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.401893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f0bc0 00:34:55.114 [2024-04-17 08:31:28.402695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.402731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.417628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f0788 00:34:55.114 [2024-04-17 08:31:28.418408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.418445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:55.114 [2024-04-17 08:31:28.433268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190f0350 00:34:55.114 [2024-04-17 08:31:28.434042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.114 [2024-04-17 08:31:28.434077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.448958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190eff18 00:34:55.373 [2024-04-17 08:31:28.449723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.449761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.464582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190efae0 00:34:55.373 [2024-04-17 08:31:28.465326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.465360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.480179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ef6a8 00:34:55.373 [2024-04-17 08:31:28.480907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.480943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.495691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ef270 00:34:55.373 [2024-04-17 08:31:28.496401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17144 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:55.373 [2024-04-17 08:31:28.496435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.511225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190eee38 00:34:55.373 [2024-04-17 08:31:28.511944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.511981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.526987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190eea00 00:34:55.373 [2024-04-17 08:31:28.527692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.527732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.542641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ee5c8 00:34:55.373 [2024-04-17 08:31:28.543339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.543377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.558243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ee190 00:34:55.373 [2024-04-17 08:31:28.558934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.558978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.573807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190edd58 00:34:55.373 [2024-04-17 08:31:28.574486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.574532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.589445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ed920 00:34:55.373 [2024-04-17 08:31:28.590107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.590150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.605183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ed4e8 00:34:55.373 [2024-04-17 08:31:28.605843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2228 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.605886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.621404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ed0b0 00:34:55.373 [2024-04-17 08:31:28.622050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.622104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.638245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ecc78 00:34:55.373 [2024-04-17 08:31:28.638897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.638949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.655136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ec840 00:34:55.373 [2024-04-17 08:31:28.655757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.655801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.670617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ec408 00:34:55.373 [2024-04-17 08:31:28.671222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.671261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.686166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ebfd0 00:34:55.373 [2024-04-17 08:31:28.686782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.686822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:55.373 [2024-04-17 08:31:28.701693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ebb98 00:34:55.373 [2024-04-17 08:31:28.702282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.373 [2024-04-17 08:31:28.702329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.717273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190eb760 00:34:55.632 [2024-04-17 08:31:28.717855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21074 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.717893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.732877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190eb328 00:34:55.632 [2024-04-17 08:31:28.733460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.733501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.748678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190eaef0 00:34:55.632 [2024-04-17 08:31:28.749253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.749291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.764439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190eaab8 00:34:55.632 [2024-04-17 08:31:28.764994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.765032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.780103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ea680 00:34:55.632 [2024-04-17 08:31:28.780663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.780701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.795757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190ea248 00:34:55.632 [2024-04-17 08:31:28.796291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.796335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.811367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e9e10 00:34:55.632 [2024-04-17 08:31:28.811893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.811927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.827079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e99d8 00:34:55.632 [2024-04-17 08:31:28.827613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8046 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.827649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.842863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e95a0 00:34:55.632 [2024-04-17 08:31:28.843386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.843427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.858540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e9168 00:34:55.632 [2024-04-17 08:31:28.859047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.859084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.874242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e8d30 00:34:55.632 [2024-04-17 08:31:28.874753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.874791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.889916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e88f8 00:34:55.632 [2024-04-17 08:31:28.890407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.890445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.905664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e84c0 00:34:55.632 [2024-04-17 08:31:28.906133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.906170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.921353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e8088 00:34:55.632 [2024-04-17 08:31:28.921810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.921845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.937034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e7c50 00:34:55.632 [2024-04-17 08:31:28.937498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:17632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.937533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:55.632 [2024-04-17 08:31:28.952678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e7818 00:34:55.632 [2024-04-17 08:31:28.953117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.632 [2024-04-17 08:31:28.953155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:28.968273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e73e0 00:34:55.892 [2024-04-17 08:31:28.968704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:28.968738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:28.983680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e6fa8 00:34:55.892 [2024-04-17 08:31:28.984093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:28.984127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:28.999034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e6b70 00:34:55.892 [2024-04-17 08:31:28.999437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:28.999470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.014400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e6738 00:34:55.892 [2024-04-17 08:31:29.014800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.014836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.029869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e6300 00:34:55.892 [2024-04-17 08:31:29.030250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.030283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.045401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e5ec8 00:34:55.892 [2024-04-17 08:31:29.045782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:4113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.045818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.061098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e5a90 00:34:55.892 [2024-04-17 08:31:29.061488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.061524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.076759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e5658 00:34:55.892 [2024-04-17 08:31:29.077127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.077165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.092429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e5220 00:34:55.892 [2024-04-17 08:31:29.092784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.092823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.107999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e4de8 00:34:55.892 [2024-04-17 08:31:29.108349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.108389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.123490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e49b0 00:34:55.892 [2024-04-17 08:31:29.123814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.123850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.138884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e4578 00:34:55.892 [2024-04-17 08:31:29.139194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.139232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.154284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e4140 00:34:55.892 [2024-04-17 08:31:29.154605] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.154656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.169797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e3d08 00:34:55.892 [2024-04-17 08:31:29.170084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.170119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.185412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e38d0 00:34:55.892 [2024-04-17 08:31:29.185699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.185734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.200968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e3498 00:34:55.892 [2024-04-17 08:31:29.201248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.201286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:55.892 [2024-04-17 08:31:29.216518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e3060 00:34:55.892 [2024-04-17 08:31:29.216772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.892 [2024-04-17 08:31:29.216803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:56.152 [2024-04-17 08:31:29.232012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e2c28 00:34:56.152 [2024-04-17 08:31:29.232268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.152 [2024-04-17 08:31:29.232301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:56.152 [2024-04-17 08:31:29.247666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e27f0 00:34:56.152 [2024-04-17 08:31:29.247967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.152 [2024-04-17 08:31:29.247995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:56.152 [2024-04-17 08:31:29.263282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e23b8 00:34:56.152 [2024-04-17 08:31:29.263534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.152 [2024-04-17 08:31:29.263565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.278816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e1f80 00:34:56.153 [2024-04-17 08:31:29.279047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.279073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.294264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e1b48 00:34:56.153 [2024-04-17 08:31:29.294491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.294521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.309769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e1710 00:34:56.153 [2024-04-17 08:31:29.309983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.310009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.325227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e12d8 00:34:56.153 [2024-04-17 08:31:29.325427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.325454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.340642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e0ea0 00:34:56.153 [2024-04-17 08:31:29.340829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.340855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.356115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e0a68 00:34:56.153 [2024-04-17 08:31:29.356291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.356330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.371613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e0630 00:34:56.153 [2024-04-17 08:31:29.371784] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.371811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.387011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190e01f8 00:34:56.153 [2024-04-17 08:31:29.387174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.387202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.402485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190dfdc0 00:34:56.153 [2024-04-17 08:31:29.402648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.402676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.417936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190df988 00:34:56.153 [2024-04-17 08:31:29.418079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.418108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.433419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190df550 00:34:56.153 [2024-04-17 08:31:29.433552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.433581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.448847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190df118 00:34:56.153 [2024-04-17 08:31:29.448975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.449004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.464603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190dece0 00:34:56.153 [2024-04-17 08:31:29.464734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.464763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:56.153 [2024-04-17 08:31:29.480252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190de8a8 00:34:56.153 [2024-04-17 
08:31:29.480373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.153 [2024-04-17 08:31:29.480404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:56.445 [2024-04-17 08:31:29.495786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9f030) with pdu=0x2000190de038 00:34:56.445 [2024-04-17 08:31:29.495884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.445 [2024-04-17 08:31:29.495913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:56.445 00:34:56.445 Latency(us) 00:34:56.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.445 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:56.445 nvme0n1 : 2.01 16215.87 63.34 0.00 0.00 7886.72 7240.44 21978.89 00:34:56.445 =================================================================================================================== 00:34:56.445 Total : 16215.87 63.34 0.00 0.00 7886.72 7240.44 21978.89 00:34:56.445 0 00:34:56.445 08:31:29 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:56.445 08:31:29 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:56.445 08:31:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:56.445 08:31:29 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:56.445 | .driver_specific 00:34:56.445 | .nvme_error 00:34:56.445 | .status_code 00:34:56.445 | .command_transient_transport_error' 00:34:56.445 08:31:29 -- host/digest.sh@71 -- # (( 127 > 0 )) 00:34:56.445 08:31:29 -- host/digest.sh@73 -- # killprocess 72027 00:34:56.445 08:31:29 -- common/autotest_common.sh@926 -- # '[' -z 72027 ']' 00:34:56.445 08:31:29 -- common/autotest_common.sh@930 -- # kill -0 72027 00:34:56.445 08:31:29 -- common/autotest_common.sh@931 -- # uname 00:34:56.445 08:31:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:56.445 08:31:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72027 00:34:56.706 killing process with pid 72027 00:34:56.706 Received shutdown signal, test time was about 2.000000 seconds 00:34:56.706 00:34:56.706 Latency(us) 00:34:56.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.706 =================================================================================================================== 00:34:56.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:56.706 08:31:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:56.706 08:31:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:56.706 08:31:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72027' 00:34:56.706 08:31:29 -- common/autotest_common.sh@945 -- # kill 72027 00:34:56.706 08:31:29 -- common/autotest_common.sh@950 -- # wait 72027 00:34:56.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
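The trace above closes out the first digest-error pass: host/digest.sh@71 fetches the per-bdev error counters for nvme0n1 with bdev_get_iostat over /var/tmp/bperf.sock, pulls .driver_specific.nvme_error.status_code.command_transient_transport_error out with jq, checks that the count is positive (127 here), and then kills the bdevperf instance (pid 72027) before the next pass starts. A minimal stand-alone sketch of that check, assuming bdevperf is already listening on /var/tmp/bperf.sock with a bdev named nvme0n1 attached and bdev_nvme_set_options --nvme-error-stat in effect (paths and identifiers are taken from the trace above; the variable names and the script itself are illustrative, not the autotest code):

    #!/usr/bin/env bash
    # Illustrative sketch: read the NVMe "command transient transport error" counter
    # that bdev_get_iostat exposes when --nvme-error-stat is enabled, and fail if the
    # injected digest errors never surfaced as completions.
    set -euo pipefail

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path used by this job
    sock=/var/tmp/bperf.sock                          # bdevperf RPC socket
    bdev=nvme0n1

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    echo "transient transport errors on $bdev: $errcount"
    (( errcount > 0 ))   # exits non-zero (and aborts under set -e) if the count is 0

The counter is non-zero here because accel_error_inject_error corrupts the crc32c calculation used for the TCP data digest, each affected WRITE trips the data_crc32_calc_done check (the *ERROR* lines above), and those commands are completed with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status that this counter tracks. The pass that follows repeats the same sequence with 131072-byte I/O at queue depth 16 (run_bperf_err randwrite 131072 16).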
00:34:56.706 08:31:29 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:34:56.706 08:31:29 -- host/digest.sh@54 -- # local rw bs qd 00:34:56.706 08:31:29 -- host/digest.sh@56 -- # rw=randwrite 00:34:56.706 08:31:29 -- host/digest.sh@56 -- # bs=131072 00:34:56.706 08:31:29 -- host/digest.sh@56 -- # qd=16 00:34:56.706 08:31:29 -- host/digest.sh@58 -- # bperfpid=72083 00:34:56.706 08:31:29 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:56.706 08:31:29 -- host/digest.sh@60 -- # waitforlisten 72083 /var/tmp/bperf.sock 00:34:56.706 08:31:29 -- common/autotest_common.sh@819 -- # '[' -z 72083 ']' 00:34:56.706 08:31:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:56.706 08:31:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:56.706 08:31:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:56.706 08:31:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:56.706 08:31:29 -- common/autotest_common.sh@10 -- # set +x 00:34:56.965 [2024-04-17 08:31:30.042561] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:56.965 [2024-04-17 08:31:30.042776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72083 ] 00:34:56.965 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:56.965 Zero copy mechanism will not be used. 00:34:56.965 [2024-04-17 08:31:30.187936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.965 [2024-04-17 08:31:30.294258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.904 08:31:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:57.904 08:31:30 -- common/autotest_common.sh@852 -- # return 0 00:34:57.904 08:31:30 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:57.904 08:31:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:57.904 08:31:31 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:57.904 08:31:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:57.904 08:31:31 -- common/autotest_common.sh@10 -- # set +x 00:34:57.904 08:31:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:57.904 08:31:31 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:57.904 08:31:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.163 nvme0n1 00:34:58.163 08:31:31 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:58.163 08:31:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.163 08:31:31 -- common/autotest_common.sh@10 -- # set +x 00:34:58.163 08:31:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.163 08:31:31 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:58.163 08:31:31 -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:58.424 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:58.424 Zero copy mechanism will not be used. 00:34:58.424 Running I/O for 2 seconds... 00:34:58.424 [2024-04-17 08:31:31.538081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.538684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.538785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.542849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.543371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.543460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.547276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.547789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.547869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.551698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.552212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.552294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.556050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.556558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.556638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.560270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.560814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.560888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.564522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.564994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.565055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.568632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.569128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.569207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.572888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.573407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.573481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.577258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.577765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.577846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.581809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.582342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.582420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.586399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.586944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.587029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.591073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.591613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.591694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.595789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 
[2024-04-17 08:31:31.596338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.596419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.600465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.600979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.601064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.605031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.605558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.605642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.609632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.610159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.610243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.614163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.614690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.614770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.618413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.618941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.619020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.622688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.623190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.623269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.626870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) 
with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.627384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.627462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.631100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.424 [2024-04-17 08:31:31.631600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.424 [2024-04-17 08:31:31.631680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.424 [2024-04-17 08:31:31.635375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.635797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.635823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.639531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.639969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.639991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.643484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.643937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.643961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.647694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.648136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.648168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.651745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.652167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.652197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.655954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.656382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.656428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.659862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.660303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.660362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.664050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.664506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.664533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.668253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.668714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.668743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.672450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.672885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.672917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.676662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.677091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.677117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.680721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.681148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.681179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.684823] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.685244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.685275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.688840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.689256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.689278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.692778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.693189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.693212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.696735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.697125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.697146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.700638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.701062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.701093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.704644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.705053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.705075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.708427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.708849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.708879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:34:58.425 [2024-04-17 08:31:31.712467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.712881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.712911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.716499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.716952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.716976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.720719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.721166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.721190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.725006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.725448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.725472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.729211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.729649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.729672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.733473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.733919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.733948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.737866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.738324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.425 [2024-04-17 08:31:31.738354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.425 [2024-04-17 08:31:31.742206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.425 [2024-04-17 08:31:31.742672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.426 [2024-04-17 08:31:31.742698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.426 [2024-04-17 08:31:31.746586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.426 [2024-04-17 08:31:31.747041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.426 [2024-04-17 08:31:31.747089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.426 [2024-04-17 08:31:31.750885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.426 [2024-04-17 08:31:31.751341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.426 [2024-04-17 08:31:31.751371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.755113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.755568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.755598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.759396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.759834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.759864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.763605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.764032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.764063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.767835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.768289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.768334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.772147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.772623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.772653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.776443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.776897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.776929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.780703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.781154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.781186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.785089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.785538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.785569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.789376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.789831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.789862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.793684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.794134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.794165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.798090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.798550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.798581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.802371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.802848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.802881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.806710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.807143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.807176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.811088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.811549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.811579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.815581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.816081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.816113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.820111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.820600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.820629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.824524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.824978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.825003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.829050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.829506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 
[2024-04-17 08:31:31.829531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.833365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.833819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.833843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.837689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.838141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.838167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.842118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.842586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.842621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.846414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.846876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.846901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.850674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.851132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.851166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.855136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.855598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.855632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.859421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.859861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.859893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.863726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.687 [2024-04-17 08:31:31.864179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.687 [2024-04-17 08:31:31.864212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.687 [2024-04-17 08:31:31.868138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.868613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.868646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.872550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.873017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.873051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.876931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.877410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.877443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.881276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.881759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.881793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.885700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.886174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.886207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.890121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.890607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.890640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.894556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.895023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.895055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.898945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.899427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.899463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.903353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.903818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.903852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.907747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.908221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.908255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.912150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.912641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.912681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.916678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.917135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.917169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.921186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.921680] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.921710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.925656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.926111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.926139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.930035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.930507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.930557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.934449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.934949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.934983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.938936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.939424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.939456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.943363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.943807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.943840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.947655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.948120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.948155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.952274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.952777] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.952811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.956789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.957257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.957291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.961243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.961720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.961750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.965748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.966232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.966270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.970984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.971488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.971530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.975769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.976241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.976278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.981126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.688 [2024-04-17 08:31:31.981617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.981655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.985689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 
00:34:58.688 [2024-04-17 08:31:31.986165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.688 [2024-04-17 08:31:31.986202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.688 [2024-04-17 08:31:31.990218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.689 [2024-04-17 08:31:31.990718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.689 [2024-04-17 08:31:31.990753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.689 [2024-04-17 08:31:31.994903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.689 [2024-04-17 08:31:31.995385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.689 [2024-04-17 08:31:31.995419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.689 [2024-04-17 08:31:31.999428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.689 [2024-04-17 08:31:31.999889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.689 [2024-04-17 08:31:31.999924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.689 [2024-04-17 08:31:32.003887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.689 [2024-04-17 08:31:32.004345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.689 [2024-04-17 08:31:32.004377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.689 [2024-04-17 08:31:32.008413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.689 [2024-04-17 08:31:32.008892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.689 [2024-04-17 08:31:32.008927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.689 [2024-04-17 08:31:32.013097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.689 [2024-04-17 08:31:32.013578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.689 [2024-04-17 08:31:32.013614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.689 [2024-04-17 08:31:32.017666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.018127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.018163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.022269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.022766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.022801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.026836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.027323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.027352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.031386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.031860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.031897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.035926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.036412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.036447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.040524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.041002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.041039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.044978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.045466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.045503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.049718] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.050194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.050230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.054246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.054754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.054790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.058841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.059308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.059349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.063374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.063837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.063874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.067947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.068415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.068449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.072522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.073010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.073045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.077092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.077571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.077605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:58.949 [2024-04-17 08:31:32.081602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.082063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.082097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.086182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.086671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.086706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.090896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.091367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.091414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.095377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.095476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.095511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.949 [2024-04-17 08:31:32.100058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.949 [2024-04-17 08:31:32.100140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.949 [2024-04-17 08:31:32.100165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.104591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.104670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.104695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.109003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.109087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.109110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.113502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.113585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.113606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.117971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.118075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.118096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.122494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.122566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.122587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.126770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.126851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.126871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.131126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.131198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.131216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.135510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.135587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.135607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.139912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.139999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.140019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.144470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.144548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.144567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.148600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.148674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.148694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.152899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.152988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.153008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.157242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.157326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.157347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.161555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.161630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.161649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.165835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.165911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.165929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.170098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.170177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.170195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.174677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.174756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.174776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.179000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.179076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.179094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.183267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.183353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.183371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.187623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.187700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.187720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.191881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.191951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.191970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.196064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.196144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.196167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.200436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.200514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.200533] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.204843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.204922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.204940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.209288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.209380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.209400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.213708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.213794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.213813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.218120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.950 [2024-04-17 08:31:32.218197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.950 [2024-04-17 08:31:32.218217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.950 [2024-04-17 08:31:32.222534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.222618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.222638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.226896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.226981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.227001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.231337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.231416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.231435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.235615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.235689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.235708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.239993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.240067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.240085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.244429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.244501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.244520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.248776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.248869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.248887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.253211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.253285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.253305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.257586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.257657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.257676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.261997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.262073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 
08:31:32.262092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.266475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.266551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.266571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.270893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.270967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.270987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.275348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.275421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.275441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:58.951 [2024-04-17 08:31:32.279830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:58.951 [2024-04-17 08:31:32.279918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.951 [2024-04-17 08:31:32.279938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.284287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.284397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.284417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.288688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.288762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.288782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.293121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.293221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:59.212 [2024-04-17 08:31:32.293241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.297579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.297674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.297694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.302069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.302143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.302162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.306482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.306558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.306577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.310848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.310925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.310944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.315240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.315336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.315356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.319675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.319751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.319770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.324076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.324152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.324172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.328573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.328649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.328669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.333015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.333092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.333111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.337443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.337516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.337535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.341827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.341900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.341918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.346190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.346258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.346277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.350551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.350650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.350669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.355053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.355128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.355147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.359567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.359652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.359671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.363951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.364023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.364042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.368287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.368381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.368400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.372720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.372795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.372814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.377123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.377199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.377218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.381483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.381560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.381579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.385889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.212 [2024-04-17 08:31:32.385969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.212 [2024-04-17 08:31:32.385987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.212 [2024-04-17 08:31:32.390324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.390398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.390419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.394712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.394797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.394818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.399072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.399144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.399163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.403529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.403606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.403625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.407934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.408018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.408037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.412348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.412413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.412432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.416767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.416852] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.416871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.421157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.421228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.421247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.425546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.425621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.425641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.429864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.429950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.429969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.434167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.434238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.434256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.438465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.438542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.438562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.442746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.442820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.442838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.446917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.446989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.447007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.450936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.451004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.451023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.455219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.455294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.455326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.459610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.459691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.459710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.463993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.464083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.464103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.468442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.468539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.468558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.472940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.473013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.473032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.477214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 
08:31:32.477292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.477322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.481584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.481655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.481673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.485881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.485969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.485989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.213 [2024-04-17 08:31:32.490224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.213 [2024-04-17 08:31:32.490318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.213 [2024-04-17 08:31:32.490349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.494663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.494744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.494762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.498983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.499055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.499073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.503246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.503327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.503362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.507538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 
00:34:59.214 [2024-04-17 08:31:32.507608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.507627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.512008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.512084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.512103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.516467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.516545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.516564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.520862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.520941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.520959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.525252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.525340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.525359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.529578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.529662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.529682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.533875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.533961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.533981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.538225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with 
pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.538298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.538328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.214 [2024-04-17 08:31:32.542657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.214 [2024-04-17 08:31:32.542731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.214 [2024-04-17 08:31:32.542751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.475 [2024-04-17 08:31:32.547107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.475 [2024-04-17 08:31:32.547251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.475 [2024-04-17 08:31:32.547357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.475 [2024-04-17 08:31:32.551526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.551659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.551748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.555930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.556077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.556154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.560459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.560612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.560706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.564976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.565049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.565069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.569347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.569416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.569435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.573648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.573736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.573755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.578068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.578158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.578180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.582479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.582562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.582581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.586917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.586999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.587019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.591368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.591454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.591473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.595798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.595871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.595890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.600207] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.600281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.600313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.604643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.604720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.604740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.609049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.609130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.609151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.613575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.613656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.613676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.618035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.618113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.618133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.622456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.622533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.622554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.626936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.627010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.627029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.631401] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.631478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.631498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.635798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.635882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.635901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.640257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.640346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.640367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.644673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.644760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.644780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.649102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.649183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.649203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.653542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.476 [2024-04-17 08:31:32.653619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.476 [2024-04-17 08:31:32.653641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.476 [2024-04-17 08:31:32.657982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.658057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.658077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.477 
[2024-04-17 08:31:32.662498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.662574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.662593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.666845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.666923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.666942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.671255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.671344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.671362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.675655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.675740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.675758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.680090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.680164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.680185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.684482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.684558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.684576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.688804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.688880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.688898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:34:59.477 [2024-04-17 08:31:32.692880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.692963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.692981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.697061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.697144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.697163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.701493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.701567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.701586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.705837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.705912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.705931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.710220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.710292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.710311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.714659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.714739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.714758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.719085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.719171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.719190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.723418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.723490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.723509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.727749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.727842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.727860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.732176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.732256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.732273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.736641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.736719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.736738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.741020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.741097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.741116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.745437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.745513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.745532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.749856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.749932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.749951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.754321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.754395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.754414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.758739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.758813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.758833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.763127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.763199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.763218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.767548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.767623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.767642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.477 [2024-04-17 08:31:32.771941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.477 [2024-04-17 08:31:32.772029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.477 [2024-04-17 08:31:32.772048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.478 [2024-04-17 08:31:32.776402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.478 [2024-04-17 08:31:32.776476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.478 [2024-04-17 08:31:32.776495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.478 [2024-04-17 08:31:32.780734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.478 [2024-04-17 08:31:32.780811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.478 [2024-04-17 08:31:32.780829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.478 [2024-04-17 08:31:32.785138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.478 [2024-04-17 08:31:32.785227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.478 [2024-04-17 08:31:32.785245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.478 [2024-04-17 08:31:32.789484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.478 [2024-04-17 08:31:32.789557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.478 [2024-04-17 08:31:32.789574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.478 [2024-04-17 08:31:32.793863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.478 [2024-04-17 08:31:32.793950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.478 [2024-04-17 08:31:32.793968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.478 [2024-04-17 08:31:32.798232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.478 [2024-04-17 08:31:32.798338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.478 [2024-04-17 08:31:32.798357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.478 [2024-04-17 08:31:32.802591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.478 [2024-04-17 08:31:32.802703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.478 [2024-04-17 08:31:32.802721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.806949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.807023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.807042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.811184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.811260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.811278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.815542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.815622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.815640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.819959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.820036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.820054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.824308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.824392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.824411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.828612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.828688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.828705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.832939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.833011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.833028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.837265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.837347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.837366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.841457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.841522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 
08:31:32.841539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.845580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.845662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.845682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.849989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.850067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.850086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.854439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.854520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.854539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.858664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.858753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.858772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.862913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.862995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.863014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.867149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.867225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.867243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.871436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.871509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:59.739 [2024-04-17 08:31:32.871528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.875696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.875770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.875788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.880153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.880236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.880255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.884596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.884680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.884699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.888998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.889068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.889088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.893353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.893424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.893442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.897584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.739 [2024-04-17 08:31:32.897665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.739 [2024-04-17 08:31:32.897682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.739 [2024-04-17 08:31:32.901740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.901810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.901827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.905662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.905725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.905741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.909685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.909754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.909770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.913636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.913723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.913741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.917801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.917872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.917890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.922176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.922246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.922264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.926457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.926528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.926546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.930571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.930674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.930692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.934754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.934823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.934841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.938945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.939015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.939033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.943030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.943102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.943119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.947296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.947382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.947402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.951706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.951781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.951801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.956097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.956180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.956199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.960515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.960602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.960622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.964970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.965051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.965071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.969375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.969454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.969473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.973788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.973868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.973886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.978198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.978274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.978293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.982687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.982761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.982781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.987014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.987100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.987118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.991521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.991637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.991659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:32.995959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:32.996043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:32.996063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:33.000356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:33.000435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:33.000455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:33.004758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:33.004833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:33.004852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:33.009096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:33.009169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:33.009187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.740 [2024-04-17 08:31:33.013447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.740 [2024-04-17 08:31:33.013519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.740 [2024-04-17 08:31:33.013538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.017759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.017841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.017860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.022179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.022256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.022274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.026640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.026715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.026735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.030987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.031060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.031078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.035378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.035451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.035470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.039642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.039713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.039732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.043918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.044005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.044024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.048430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.048506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.048525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.052820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 
08:31:33.052896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.052914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.057263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.057358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.057377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.061674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.061754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.061773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.741 [2024-04-17 08:31:33.066133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:34:59.741 [2024-04-17 08:31:33.066203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.741 [2024-04-17 08:31:33.066223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.070463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.070533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.070552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.074904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.074977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.074997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.079347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.079423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.079443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.083712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 
00:35:00.050 [2024-04-17 08:31:33.083793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.083813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.088132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.088212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.088232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.092575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.092654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.092674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.097094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.097178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.097198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.101568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.101726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.101764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.106722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.106827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.106853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.111300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.111404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.111428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.116083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) 
with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.116170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.116193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.120620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.120701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.120722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.125190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.125272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.125296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.129771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.129861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.129884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.134240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.134336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.134358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.138702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.138780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.138801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.143229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.143332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.143352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.147664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.147737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.147757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.152080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.152156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.152175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.156486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.156562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.156582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.160907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.160985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.161005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.050 [2024-04-17 08:31:33.165298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.050 [2024-04-17 08:31:33.165383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.050 [2024-04-17 08:31:33.165402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.169773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.169863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.169882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.174168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.174267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.178684] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.178769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.178789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.183094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.183169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.183189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.187589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.187666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.187684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.191969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.192046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.192064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.196419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.196500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.196519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.200788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.200861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.200881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.205288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.205381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.205400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:35:00.051 [2024-04-17 08:31:33.209735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.209819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.209838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.214284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.214378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.214397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.218692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.218782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.218802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.223198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.223280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.223300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.227560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.227638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.227659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.231908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.231975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.231993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.236035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.236115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.236133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.240263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.240366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.240384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.244541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.244629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.244648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.248901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.248970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.248988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.253243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.253325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.253360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.257600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.257670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.257688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.261989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.262065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.262085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.266386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.266463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.266482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.051 [2024-04-17 08:31:33.270778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.051 [2024-04-17 08:31:33.270862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.051 [2024-04-17 08:31:33.270881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.275154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.275235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.275254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.279485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.279560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.279579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.283678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.283758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.283776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.287867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.287938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.287954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.291959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.292029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.292046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.296044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.296104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.296120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.300164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.300246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.300273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.304387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.304478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.304497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.308290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.308386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.308404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.312196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.312260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.312276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.316117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.316179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.316196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.320078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.320140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.320156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.324002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.324064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 
[2024-04-17 08:31:33.324080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.328054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.328122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.328138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.332153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.332222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.332238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.336206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.336276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.336294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.340372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.340442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.340459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.344485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.344560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.344578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.348558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.348638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.348656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.352674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.352737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.352771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.052 [2024-04-17 08:31:33.356877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.052 [2024-04-17 08:31:33.356945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.052 [2024-04-17 08:31:33.356963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.361235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.361310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.361340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.365616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.365691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.365709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.369903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.369969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.369988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.374191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.374265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.374283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.378394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.378466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.378484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.382454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.382514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.382530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.386674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.386742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.386760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.390921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.390996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.391014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.395032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.395101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.395119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.399277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.399365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.399384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.403460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.403528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.403546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.407837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.407909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.407929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.412126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.412203] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.412221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.416502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.416574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.416592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.420814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.420902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.420920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.425205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.425295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.425314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.429522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.429616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.429635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.433827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.328 [2024-04-17 08:31:33.433897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.328 [2024-04-17 08:31:33.433915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.328 [2024-04-17 08:31:33.438118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.438198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.438216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.442153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.442218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.442235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.446096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.446162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.446179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.450223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.450285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.450319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.454337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.454396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.454413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.458275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.458368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.458385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.462170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.462233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.462250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.466238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.466307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.466338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.470314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 
08:31:33.470392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.470411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.474307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.474381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.474397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.478365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.478423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.478440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.482420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.482508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.482526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.486567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.486639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.486672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.490713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.490781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.490799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.495021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.495104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.495122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.499362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with 
pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.499431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.499449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.503559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.503629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.503648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.507688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.507760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.507778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.511952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.512025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.512044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.516125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.516194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.516211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.520372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.520431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.520448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.524185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.524248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.524265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.329 [2024-04-17 08:31:33.528265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf9dd00) with pdu=0x2000190fef90 00:35:00.329 [2024-04-17 08:31:33.528349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.329 [2024-04-17 08:31:33.528392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.329 00:35:00.329 Latency(us) 00:35:00.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.329 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:00.329 nvme0n1 : 2.00 7102.11 887.76 0.00 0.00 2249.12 1438.07 5408.87 00:35:00.330 =================================================================================================================== 00:35:00.330 Total : 7102.11 887.76 0.00 0.00 2249.12 1438.07 5408.87 00:35:00.330 0 00:35:00.330 08:31:33 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:00.330 08:31:33 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:00.330 08:31:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:00.330 08:31:33 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:00.330 | .driver_specific 00:35:00.330 | .nvme_error 00:35:00.330 | .status_code 00:35:00.330 | .command_transient_transport_error' 00:35:00.589 08:31:33 -- host/digest.sh@71 -- # (( 458 > 0 )) 00:35:00.589 08:31:33 -- host/digest.sh@73 -- # killprocess 72083 00:35:00.590 08:31:33 -- common/autotest_common.sh@926 -- # '[' -z 72083 ']' 00:35:00.590 08:31:33 -- common/autotest_common.sh@930 -- # kill -0 72083 00:35:00.590 08:31:33 -- common/autotest_common.sh@931 -- # uname 00:35:00.590 08:31:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:00.590 08:31:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72083 00:35:00.590 killing process with pid 72083 00:35:00.590 Received shutdown signal, test time was about 2.000000 seconds 00:35:00.590 00:35:00.590 Latency(us) 00:35:00.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.590 =================================================================================================================== 00:35:00.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:00.590 08:31:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:35:00.590 08:31:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:35:00.590 08:31:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72083' 00:35:00.590 08:31:33 -- common/autotest_common.sh@945 -- # kill 72083 00:35:00.590 08:31:33 -- common/autotest_common.sh@950 -- # wait 72083 00:35:00.848 08:31:34 -- host/digest.sh@115 -- # killprocess 71874 00:35:00.848 08:31:34 -- common/autotest_common.sh@926 -- # '[' -z 71874 ']' 00:35:00.848 08:31:34 -- common/autotest_common.sh@930 -- # kill -0 71874 00:35:00.848 08:31:34 -- common/autotest_common.sh@931 -- # uname 00:35:00.848 08:31:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:00.848 08:31:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71874 00:35:00.848 killing process with pid 71874 00:35:00.848 08:31:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:00.848 08:31:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:00.848 08:31:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
71874' 00:35:00.848 08:31:34 -- common/autotest_common.sh@945 -- # kill 71874 00:35:00.848 08:31:34 -- common/autotest_common.sh@950 -- # wait 71874 00:35:01.111 ************************************ 00:35:01.111 END TEST nvmf_digest_error 00:35:01.111 ************************************ 00:35:01.111 00:35:01.111 real 0m17.925s 00:35:01.111 user 0m34.465s 00:35:01.111 sys 0m4.524s 00:35:01.111 08:31:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:01.111 08:31:34 -- common/autotest_common.sh@10 -- # set +x 00:35:01.111 08:31:34 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:35:01.111 08:31:34 -- host/digest.sh@139 -- # nvmftestfini 00:35:01.111 08:31:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:01.111 08:31:34 -- nvmf/common.sh@116 -- # sync 00:35:01.111 08:31:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:01.111 08:31:34 -- nvmf/common.sh@119 -- # set +e 00:35:01.111 08:31:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:01.111 08:31:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:01.111 rmmod nvme_tcp 00:35:01.111 rmmod nvme_fabrics 00:35:01.111 rmmod nvme_keyring 00:35:01.370 08:31:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:01.370 Process with pid 71874 is not found 00:35:01.370 08:31:34 -- nvmf/common.sh@123 -- # set -e 00:35:01.370 08:31:34 -- nvmf/common.sh@124 -- # return 0 00:35:01.370 08:31:34 -- nvmf/common.sh@477 -- # '[' -n 71874 ']' 00:35:01.370 08:31:34 -- nvmf/common.sh@478 -- # killprocess 71874 00:35:01.370 08:31:34 -- common/autotest_common.sh@926 -- # '[' -z 71874 ']' 00:35:01.370 08:31:34 -- common/autotest_common.sh@930 -- # kill -0 71874 00:35:01.371 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71874) - No such process 00:35:01.371 08:31:34 -- common/autotest_common.sh@953 -- # echo 'Process with pid 71874 is not found' 00:35:01.371 08:31:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:35:01.371 08:31:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:01.371 08:31:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:01.371 08:31:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:01.371 08:31:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:01.371 08:31:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.371 08:31:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:01.371 08:31:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.371 08:31:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:35:01.371 00:35:01.371 real 0m36.971s 00:35:01.371 user 1m9.437s 00:35:01.371 sys 0m9.484s 00:35:01.371 08:31:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:01.371 08:31:34 -- common/autotest_common.sh@10 -- # set +x 00:35:01.371 ************************************ 00:35:01.371 END TEST nvmf_digest 00:35:01.371 ************************************ 00:35:01.371 08:31:34 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:35:01.371 08:31:34 -- nvmf/nvmf.sh@114 -- # [[ 1 -eq 1 ]] 00:35:01.371 08:31:34 -- nvmf/nvmf.sh@115 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:35:01.371 08:31:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:01.371 08:31:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:01.371 08:31:34 -- common/autotest_common.sh@10 -- # set +x 00:35:01.371 ************************************ 00:35:01.371 START TEST nvmf_multipath 00:35:01.371 
************************************ 00:35:01.371 08:31:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:35:01.371 * Looking for test storage... 00:35:01.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:35:01.371 08:31:34 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:01.371 08:31:34 -- nvmf/common.sh@7 -- # uname -s 00:35:01.371 08:31:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.371 08:31:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.371 08:31:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.371 08:31:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.371 08:31:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.371 08:31:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.371 08:31:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.371 08:31:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.371 08:31:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.371 08:31:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.371 08:31:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:35:01.371 08:31:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:35:01.371 08:31:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.371 08:31:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.371 08:31:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:01.371 08:31:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:01.371 08:31:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.371 08:31:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.371 08:31:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.371 08:31:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.371 08:31:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.371 08:31:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.371 08:31:34 -- paths/export.sh@5 -- # export PATH 00:35:01.371 08:31:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.371 08:31:34 -- nvmf/common.sh@46 -- # : 0 00:35:01.371 08:31:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:01.371 08:31:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:01.371 08:31:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:01.371 08:31:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.371 08:31:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.371 08:31:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:01.371 08:31:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:01.371 08:31:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:01.371 08:31:34 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:01.371 08:31:34 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:01.371 08:31:34 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:01.371 08:31:34 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:35:01.371 08:31:34 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:01.371 08:31:34 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:01.371 08:31:34 -- host/multipath.sh@30 -- # nvmftestinit 00:35:01.371 08:31:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:01.371 08:31:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:01.371 08:31:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:01.371 08:31:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:01.371 08:31:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:01.371 08:31:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.371 08:31:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:01.371 08:31:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.371 08:31:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:35:01.371 08:31:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:35:01.371 08:31:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:35:01.371 08:31:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:35:01.371 08:31:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:35:01.371 08:31:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:35:01.371 08:31:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.371 08:31:34 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.371 08:31:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:01.371 08:31:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:35:01.371 08:31:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:01.371 08:31:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:01.371 08:31:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:01.371 08:31:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.371 08:31:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:01.371 08:31:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:01.371 08:31:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:01.371 08:31:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:01.371 08:31:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:35:01.371 08:31:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:35:01.371 Cannot find device "nvmf_tgt_br" 00:35:01.371 08:31:34 -- nvmf/common.sh@154 -- # true 00:35:01.371 08:31:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:35:01.371 Cannot find device "nvmf_tgt_br2" 00:35:01.371 08:31:34 -- nvmf/common.sh@155 -- # true 00:35:01.371 08:31:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:35:01.371 08:31:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:35:01.630 Cannot find device "nvmf_tgt_br" 00:35:01.630 08:31:34 -- nvmf/common.sh@157 -- # true 00:35:01.630 08:31:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:35:01.630 Cannot find device "nvmf_tgt_br2" 00:35:01.630 08:31:34 -- nvmf/common.sh@158 -- # true 00:35:01.630 08:31:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:35:01.630 08:31:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:35:01.630 08:31:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:01.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:01.630 08:31:34 -- nvmf/common.sh@161 -- # true 00:35:01.630 08:31:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:01.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:01.630 08:31:34 -- nvmf/common.sh@162 -- # true 00:35:01.630 08:31:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:35:01.630 08:31:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:01.630 08:31:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:01.630 08:31:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:01.630 08:31:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:01.630 08:31:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:01.630 08:31:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:01.630 08:31:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:01.630 08:31:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:01.630 08:31:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:35:01.630 08:31:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:35:01.630 08:31:34 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:35:01.630 08:31:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:35:01.630 08:31:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:01.630 08:31:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:01.631 08:31:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:01.631 08:31:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:35:01.631 08:31:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:35:01.631 08:31:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:35:01.631 08:31:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:01.631 08:31:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:01.631 08:31:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:01.631 08:31:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:01.631 08:31:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:35:01.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:35:01.631 00:35:01.631 --- 10.0.0.2 ping statistics --- 00:35:01.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.631 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:35:01.631 08:31:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:35:01.631 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:01.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:35:01.631 00:35:01.631 --- 10.0.0.3 ping statistics --- 00:35:01.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.631 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:35:01.631 08:31:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:01.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:35:01.889 00:35:01.889 --- 10.0.0.1 ping statistics --- 00:35:01.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.890 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:35:01.890 08:31:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.890 08:31:34 -- nvmf/common.sh@421 -- # return 0 00:35:01.890 08:31:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:35:01.890 08:31:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.890 08:31:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:01.890 08:31:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:01.890 08:31:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.890 08:31:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:01.890 08:31:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:01.890 08:31:34 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:35:01.890 08:31:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:01.890 08:31:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:01.890 08:31:34 -- common/autotest_common.sh@10 -- # set +x 00:35:01.890 08:31:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:01.890 08:31:34 -- nvmf/common.sh@469 -- # nvmfpid=72351 00:35:01.890 08:31:34 -- nvmf/common.sh@470 -- # waitforlisten 72351 00:35:01.890 08:31:34 -- common/autotest_common.sh@819 -- # '[' -z 72351 ']' 00:35:01.890 08:31:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.890 08:31:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:01.890 08:31:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.890 08:31:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:01.890 08:31:34 -- common/autotest_common.sh@10 -- # set +x 00:35:01.890 [2024-04-17 08:31:35.045130] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:01.890 [2024-04-17 08:31:35.045189] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.890 [2024-04-17 08:31:35.169806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:02.149 [2024-04-17 08:31:35.277970] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:02.149 [2024-04-17 08:31:35.278115] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.149 [2024-04-17 08:31:35.278123] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.149 [2024-04-17 08:31:35.278129] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
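For orientation, the nvmf_veth_init sequence traced above reduces to roughly the following shell commands (a condensed sketch reconstructed from the xtrace output; only commands that actually appear in the trace are kept, and the cleanup/error paths are omitted):

  # The target runs inside its own network namespace; the initiator stays in the host namespace.
  ip netns add nvmf_tgt_ns_spdk
  # Three veth pairs: one for the initiator, two for the target side.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target interfaces in the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # A bridge in the host namespace ties the three host-side veth ends together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # nvmf_tgt is then started inside the namespace, as traced above by nvmfappstart.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3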
00:35:02.149 [2024-04-17 08:31:35.278467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.149 [2024-04-17 08:31:35.278467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.718 08:31:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:02.718 08:31:35 -- common/autotest_common.sh@852 -- # return 0 00:35:02.718 08:31:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:02.718 08:31:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:02.718 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:35:02.718 08:31:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.718 08:31:35 -- host/multipath.sh@33 -- # nvmfapp_pid=72351 00:35:02.718 08:31:35 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:02.977 [2024-04-17 08:31:36.143210] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.977 08:31:36 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:03.236 Malloc0 00:35:03.236 08:31:36 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:35:03.236 08:31:36 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:03.495 08:31:36 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:03.755 [2024-04-17 08:31:36.891829] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.755 08:31:36 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:03.755 [2024-04-17 08:31:37.064133] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:03.755 08:31:37 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:03.755 08:31:37 -- host/multipath.sh@44 -- # bdevperf_pid=72397 00:35:03.755 08:31:37 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:03.755 08:31:37 -- host/multipath.sh@47 -- # waitforlisten 72397 /var/tmp/bdevperf.sock 00:35:03.755 08:31:37 -- common/autotest_common.sh@819 -- # '[' -z 72397 ']' 00:35:03.755 08:31:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:03.755 08:31:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:03.755 08:31:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:03.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
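The multipath topology being exercised is easier to see with the RPC sequence pulled out of the trace above. In rough outline (a sketch assembled from the traced commands, with paths exactly as they appear in the log; the grouping into one listing is editorial), the target exposes one malloc namespace behind two TCP listeners on the same subsystem, and bdevperf is started with its own RPC socket so both listeners can later be attached as a single multipath bdev:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (nvmf_tgt inside the nvmf_tgt_ns_spdk namespace):
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # -r enables ANA reporting on the subsystem, which the multipath test relies on.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two paths to the same subsystem: same IP, ports 4420 and 4421.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Initiator side: bdevperf on a separate RPC socket; it keeps running for the whole test.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  # Once bdevperf is listening, the two listeners are attached as one multipath controller
  # (bdev_nvme_attach_controller ... -s 4420, then ... -s 4421 -x multipath), as traced below.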
00:35:03.755 08:31:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:03.755 08:31:37 -- common/autotest_common.sh@10 -- # set +x 00:35:04.693 08:31:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:04.693 08:31:37 -- common/autotest_common.sh@852 -- # return 0 00:35:04.693 08:31:37 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:04.951 08:31:38 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:35:05.210 Nvme0n1 00:35:05.210 08:31:38 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:05.469 Nvme0n1 00:35:05.469 08:31:38 -- host/multipath.sh@78 -- # sleep 1 00:35:05.469 08:31:38 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:06.408 08:31:39 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:35:06.408 08:31:39 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:06.667 08:31:39 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:06.927 08:31:40 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:35:06.927 08:31:40 -- host/multipath.sh@65 -- # dtrace_pid=72442 00:35:06.927 08:31:40 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72351 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:35:06.927 08:31:40 -- host/multipath.sh@66 -- # sleep 6 00:35:13.502 08:31:46 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:35:13.502 08:31:46 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:35:13.502 08:31:46 -- host/multipath.sh@67 -- # active_port=4421 00:35:13.502 08:31:46 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:13.502 Attaching 4 probes... 
00:35:13.502 @path[10.0.0.2, 4421]: 20194 00:35:13.502 @path[10.0.0.2, 4421]: 20489 00:35:13.502 @path[10.0.0.2, 4421]: 20489 00:35:13.502 @path[10.0.0.2, 4421]: 20462 00:35:13.502 @path[10.0.0.2, 4421]: 20376 00:35:13.502 08:31:46 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:35:13.502 08:31:46 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:35:13.502 08:31:46 -- host/multipath.sh@69 -- # sed -n 1p 00:35:13.502 08:31:46 -- host/multipath.sh@69 -- # port=4421 00:35:13.502 08:31:46 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:35:13.502 08:31:46 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:35:13.502 08:31:46 -- host/multipath.sh@72 -- # kill 72442 00:35:13.502 08:31:46 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:13.502 08:31:46 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:35:13.502 08:31:46 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:13.502 08:31:46 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:13.502 08:31:46 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:35:13.502 08:31:46 -- host/multipath.sh@65 -- # dtrace_pid=72555 00:35:13.502 08:31:46 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72351 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:35:13.502 08:31:46 -- host/multipath.sh@66 -- # sleep 6 00:35:20.070 08:31:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:35:20.070 08:31:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:35:20.070 08:31:52 -- host/multipath.sh@67 -- # active_port=4420 00:35:20.070 08:31:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:20.070 Attaching 4 probes... 
00:35:20.070 @path[10.0.0.2, 4420]: 20212 00:35:20.070 @path[10.0.0.2, 4420]: 20590 00:35:20.070 @path[10.0.0.2, 4420]: 20564 00:35:20.070 @path[10.0.0.2, 4420]: 22031 00:35:20.070 @path[10.0.0.2, 4420]: 23207 00:35:20.070 08:31:52 -- host/multipath.sh@69 -- # sed -n 1p 00:35:20.070 08:31:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:35:20.070 08:31:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:35:20.070 08:31:52 -- host/multipath.sh@69 -- # port=4420 00:35:20.070 08:31:52 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:35:20.070 08:31:52 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:35:20.070 08:31:52 -- host/multipath.sh@72 -- # kill 72555 00:35:20.070 08:31:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:20.070 08:31:52 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:35:20.070 08:31:52 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:20.070 08:31:53 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:20.070 08:31:53 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:35:20.329 08:31:53 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72351 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:35:20.329 08:31:53 -- host/multipath.sh@65 -- # dtrace_pid=72672 00:35:20.329 08:31:53 -- host/multipath.sh@66 -- # sleep 6 00:35:26.934 08:31:59 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:35:26.934 08:31:59 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:35:26.934 08:31:59 -- host/multipath.sh@67 -- # active_port=4421 00:35:26.934 08:31:59 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:26.934 Attaching 4 probes... 
00:35:26.934 @path[10.0.0.2, 4421]: 16962 00:35:26.934 @path[10.0.0.2, 4421]: 22782 00:35:26.934 @path[10.0.0.2, 4421]: 23153 00:35:26.934 @path[10.0.0.2, 4421]: 21609 00:35:26.934 @path[10.0.0.2, 4421]: 20445 00:35:26.934 08:31:59 -- host/multipath.sh@69 -- # sed -n 1p 00:35:26.934 08:31:59 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:35:26.934 08:31:59 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:35:26.934 08:31:59 -- host/multipath.sh@69 -- # port=4421 00:35:26.934 08:31:59 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:35:26.934 08:31:59 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:35:26.934 08:31:59 -- host/multipath.sh@72 -- # kill 72672 00:35:26.934 08:31:59 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:26.934 08:31:59 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:35:26.934 08:31:59 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:26.934 08:31:59 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:26.934 08:32:00 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:35:26.934 08:32:00 -- host/multipath.sh@65 -- # dtrace_pid=72780 00:35:26.934 08:32:00 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72351 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:35:26.934 08:32:00 -- host/multipath.sh@66 -- # sleep 6 00:35:33.507 08:32:06 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:35:33.507 08:32:06 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:35:33.507 08:32:06 -- host/multipath.sh@67 -- # active_port= 00:35:33.507 08:32:06 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:33.507 Attaching 4 probes... 
00:35:33.507 00:35:33.507 00:35:33.507 00:35:33.507 00:35:33.507 00:35:33.507 08:32:06 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:35:33.507 08:32:06 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:35:33.507 08:32:06 -- host/multipath.sh@69 -- # sed -n 1p 00:35:33.507 08:32:06 -- host/multipath.sh@69 -- # port= 00:35:33.507 08:32:06 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:35:33.507 08:32:06 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:35:33.507 08:32:06 -- host/multipath.sh@72 -- # kill 72780 00:35:33.507 08:32:06 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:33.507 08:32:06 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:35:33.507 08:32:06 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:33.507 08:32:06 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:33.507 08:32:06 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:35:33.507 08:32:06 -- host/multipath.sh@65 -- # dtrace_pid=72898 00:35:33.507 08:32:06 -- host/multipath.sh@66 -- # sleep 6 00:35:33.507 08:32:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72351 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:35:40.094 08:32:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:35:40.094 08:32:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:35:40.094 08:32:12 -- host/multipath.sh@67 -- # active_port=4421 00:35:40.094 08:32:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:40.094 Attaching 4 probes... 
00:35:40.094 @path[10.0.0.2, 4421]: 20419 00:35:40.094 @path[10.0.0.2, 4421]: 21239 00:35:40.094 @path[10.0.0.2, 4421]: 20813 00:35:40.094 @path[10.0.0.2, 4421]: 20838 00:35:40.094 @path[10.0.0.2, 4421]: 20796 00:35:40.094 08:32:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:35:40.094 08:32:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:35:40.094 08:32:12 -- host/multipath.sh@69 -- # sed -n 1p 00:35:40.094 08:32:12 -- host/multipath.sh@69 -- # port=4421 00:35:40.094 08:32:12 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:35:40.094 08:32:12 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:35:40.094 08:32:12 -- host/multipath.sh@72 -- # kill 72898 00:35:40.094 08:32:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:40.094 08:32:12 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:40.094 [2024-04-17 08:32:13.168158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.094 [2024-04-17 08:32:13.168397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.095 [2024-04-17 08:32:13.168402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.095 [2024-04-17 08:32:13.168408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.095 [2024-04-17 08:32:13.168413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.095 [2024-04-17 08:32:13.168418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593ac0 is same with the state(5) to be set 00:35:40.095 08:32:13 -- host/multipath.sh@101 
-- # sleep 1 00:35:41.030 08:32:14 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:35:41.030 08:32:14 -- host/multipath.sh@65 -- # dtrace_pid=73022 00:35:41.030 08:32:14 -- host/multipath.sh@66 -- # sleep 6 00:35:41.030 08:32:14 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72351 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:35:47.596 08:32:20 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:35:47.596 08:32:20 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:35:47.596 08:32:20 -- host/multipath.sh@67 -- # active_port=4420 00:35:47.596 08:32:20 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:47.596 Attaching 4 probes... 00:35:47.596 @path[10.0.0.2, 4420]: 18122 00:35:47.596 @path[10.0.0.2, 4420]: 19320 00:35:47.596 @path[10.0.0.2, 4420]: 19688 00:35:47.596 @path[10.0.0.2, 4420]: 20165 00:35:47.596 @path[10.0.0.2, 4420]: 20723 00:35:47.596 08:32:20 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:35:47.596 08:32:20 -- host/multipath.sh@69 -- # sed -n 1p 00:35:47.596 08:32:20 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:35:47.596 08:32:20 -- host/multipath.sh@69 -- # port=4420 00:35:47.596 08:32:20 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:35:47.596 08:32:20 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:35:47.596 08:32:20 -- host/multipath.sh@72 -- # kill 73022 00:35:47.596 08:32:20 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:47.596 08:32:20 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:47.596 [2024-04-17 08:32:20.669773] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:47.596 08:32:20 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:47.596 08:32:20 -- host/multipath.sh@111 -- # sleep 6 00:35:54.159 08:32:26 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:35:54.159 08:32:26 -- host/multipath.sh@65 -- # dtrace_pid=73198 00:35:54.159 08:32:26 -- host/multipath.sh@66 -- # sleep 6 00:35:54.159 08:32:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72351 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:36:00.731 08:32:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:36:00.731 08:32:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:36:00.731 08:32:33 -- host/multipath.sh@67 -- # active_port=4421 00:36:00.731 08:32:33 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:36:00.731 Attaching 4 probes... 
00:36:00.731 @path[10.0.0.2, 4421]: 21989 00:36:00.731 @path[10.0.0.2, 4421]: 21709 00:36:00.731 @path[10.0.0.2, 4421]: 20608 00:36:00.731 @path[10.0.0.2, 4421]: 20256 00:36:00.731 @path[10.0.0.2, 4421]: 20549 00:36:00.731 08:32:33 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:36:00.731 08:32:33 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:36:00.731 08:32:33 -- host/multipath.sh@69 -- # sed -n 1p 00:36:00.731 08:32:33 -- host/multipath.sh@69 -- # port=4421 00:36:00.731 08:32:33 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:36:00.731 08:32:33 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:36:00.731 08:32:33 -- host/multipath.sh@72 -- # kill 73198 00:36:00.731 08:32:33 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:36:00.731 08:32:33 -- host/multipath.sh@114 -- # killprocess 72397 00:36:00.731 08:32:33 -- common/autotest_common.sh@926 -- # '[' -z 72397 ']' 00:36:00.731 08:32:33 -- common/autotest_common.sh@930 -- # kill -0 72397 00:36:00.731 08:32:33 -- common/autotest_common.sh@931 -- # uname 00:36:00.731 08:32:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:00.731 08:32:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72397 00:36:00.731 killing process with pid 72397 00:36:00.731 08:32:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:36:00.731 08:32:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:36:00.731 08:32:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72397' 00:36:00.731 08:32:33 -- common/autotest_common.sh@945 -- # kill 72397 00:36:00.731 08:32:33 -- common/autotest_common.sh@950 -- # wait 72397 00:36:00.731 Connection closed with partial response: 00:36:00.731 00:36:00.731 00:36:00.731 08:32:33 -- host/multipath.sh@116 -- # wait 72397 00:36:00.731 08:32:33 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:36:00.731 [2024-04-17 08:31:37.109346] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:00.731 [2024-04-17 08:31:37.109418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72397 ] 00:36:00.731 [2024-04-17 08:31:37.243216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.731 [2024-04-17 08:31:37.344691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:00.731 Running I/O for 90 seconds... 
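Each confirm_io_on_port block above follows the same pattern; pulled out of the trace it is roughly the following (a sketch of the flow as it appears in the log, using the "optimized on 4421" case as the example; how the bpftrace output lands in trace.txt and how dtrace_pid is captured are not visible in the trace and are assumptions here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 1. Point the ANA states at the path that is expected to carry the I/O.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  # 2. Attach the nvmf_path.bt bpftrace script to the nvmf_tgt pid (72351 in this run); it counts
  #    I/O per path and produces lines like "@path[10.0.0.2, 4421]: 20194" (assumed redirected to trace.txt).
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72351 \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &> trace.txt &
  dtrace_pid=$!
  sleep 6
  # 3. Ask the target which listener currently has the expected ANA state...
  active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # 4. ...and compare it with the port that actually shows up in the probe output.
  #    On a probe line, $2 is "4421]:", so cut strips the trailing "]:".
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
  [[ $port == "$active_port" ]]
  kill $dtrace_pid
  rm -f trace.txt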
00:36:00.731 [2024-04-17 08:31:46.751228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.731 [2024-04-17 08:31:46.751368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.731 [2024-04-17 08:31:46.751397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.731 [2024-04-17 08:31:46.751453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.731 [2024-04-17 08:31:46.751508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.731 [2024-04-17 08:31:46.751840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.731 [2024-04-17 08:31:46.751956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.731 [2024-04-17 08:31:46.751981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.751996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.731 [2024-04-17 08:31:46.752006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.731 [2024-04-17 08:31:46.752022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.752226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108936 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.752363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.752393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.752436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.752463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.752500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.752805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.752864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:36:00.732 [2024-04-17 08:31:46.752881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.752974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.752991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.753001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.753018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.753028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.753045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.753056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.753073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.732 [2024-04-17 08:31:46.753083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.753100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.753112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.732 [2024-04-17 08:31:46.753129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.732 [2024-04-17 08:31:46.753139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.753171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.753229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.753256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.753321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.753405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753433] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.753534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.753621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108576 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.753870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.753931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.753958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.753989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.754001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.754028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.754057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.754084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.754112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.754139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.733 [2024-04-17 08:31:46.754166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.754193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.754221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.754256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.733 [2024-04-17 08:31:46.754273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.733 [2024-04-17 08:31:46.754283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 
m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.754300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.754321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.754338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.754349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.754366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.754377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.754394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.754405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.754422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.754432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:46.756349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:46.756379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:46.756527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:46.756554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:46.756612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:46.756694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 
08:31:46.756722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:46.756809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:46.756903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:46.756921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:46.756931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.182928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:53.182999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:53.183059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60968 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:53.183086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:53.183111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:53.183137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.734 [2024-04-17 08:31:53.183162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:53.183187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:53.183228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:53.183254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:53.183279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:53.183314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.734 [2024-04-17 08:31:53.183331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.734 [2024-04-17 08:31:53.183340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.183442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.183605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 
m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.183657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.183697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.183800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.183907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.183930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.183952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.183975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.183988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.183997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.184011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.184019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.184034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.184043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.184057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.184066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.184300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.184327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.184347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.735 [2024-04-17 08:31:53.184356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:00.735 [2024-04-17 08:31:53.184373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.735 [2024-04-17 08:31:53.184381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.736 [2024-04-17 08:31:53.184407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:00.736 [2024-04-17 08:31:53.184620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.736 [2024-04-17 08:31:53.184645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.736 [2024-04-17 08:31:53.184670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.736 [2024-04-17 08:31:53.184695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.184981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.184997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.185006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.185031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.185056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.185086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.185111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.185136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.185162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.736 [2024-04-17 08:31:53.185187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.736 [2024-04-17 08:31:53.185214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.736 [2024-04-17 08:31:53.185241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.185266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.736 [2024-04-17 08:31:53.185291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.736 [2024-04-17 08:31:53.185326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.736 [2024-04-17 08:31:53.185351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.736 [2024-04-17 08:31:53.185367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.185377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:36:00.737 [2024-04-17 08:31:53.185507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.185518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.185544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.185595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.185909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.185950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.185976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.185992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.186001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.186076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:00.737 [2024-04-17 08:31:53.186283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.186446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-04-17 08:31:53.186475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.186500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.737 [2024-04-17 08:31:53.186516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.737 [2024-04-17 08:31:53.186525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:31:53.186542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:31:53.186551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:31:53.186567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:31:53.186575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:31:53.186593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:31:53.186602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:31:53.186618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:31:53.186636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:31:53.186653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:31:53.186662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:31:53.186678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:31:53.186687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:31:53.186703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:31:53.186712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:31:53.186728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:31:53.186737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.035260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.035438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.035493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.035520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.035547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 
dnr:0 00:36:00.738 [2024-04-17 08:32:00.035647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.035829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.035859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.035915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.035957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.035974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.035984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.036001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.036012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.036029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.036040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.036057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.036074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.036091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.738 [2024-04-17 08:32:00.036101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.036119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-04-17 08:32:00.036130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.738 [2024-04-17 08:32:00.036147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:00.739 [2024-04-17 08:32:00.036507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.036941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.036986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.036996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.037013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.037024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.037041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.739 [2024-04-17 08:32:00.037052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.037069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.037080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.037097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.037107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.037130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.037140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.037157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.037168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.037185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.037195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.037213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.037223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.739 [2024-04-17 08:32:00.037240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.739 [2024-04-17 08:32:00.037256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.037284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.037321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.037349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 
dnr:0 00:36:00.740 [2024-04-17 08:32:00.037367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.037378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.037406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.037434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.037461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.037493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.037521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.037549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.037576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.037604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.037632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.037649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.037659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.038037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.038073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.038217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.740 [2024-04-17 08:32:00.038502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.038536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.038569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:00.740 [2024-04-17 08:32:00.038592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.740 [2024-04-17 08:32:00.038603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.038647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:00.741 [2024-04-17 08:32:00.038687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.038721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.038755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.038791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.038825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.038859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.038893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.038927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.038960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.038984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.038994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 
nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.039028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.039062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.039101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.039147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.039181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.039215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.039249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.039283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.039326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.039363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.039397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.039431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.039465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.039499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.039539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:00.039572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.039606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:00.039629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.741 [2024-04-17 08:32:00.039640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:13.168479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:13.168536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:13.168566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:13.168581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.741 
[2024-04-17 08:32:13.168596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:13.168662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:13.168692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:13.168708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:13.168724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:13.168738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:13.168755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:13.168770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:13.168786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:13.168801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:13.168816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.741 [2024-04-17 08:32:13.168831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.741 [2024-04-17 08:32:13.168846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.168882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.168900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.168915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.168932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.168947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.168964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.168979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.168996] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169672] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.169962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.169978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.169994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.170011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.170026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.170044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.742 [2024-04-17 08:32:13.170059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.170077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.170092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.170110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.170125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.170143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.170165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.742 [2024-04-17 08:32:13.170183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.742 [2024-04-17 08:32:13.170198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.170232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.170265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.170295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.170646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:00.743 [2024-04-17 08:32:13.170712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.170783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.170916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.170951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.170969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.170984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.171026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 
08:32:13.171077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.171442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.743 [2024-04-17 08:32:13.171477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.171521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.171554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.743 [2024-04-17 08:32:13.171572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.743 [2024-04-17 08:32:13.171589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.171623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.171658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.171694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.171728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.171763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.171797] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.171832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.171876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.171909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.171951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.171971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.171987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.172281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.172327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.744 [2024-04-17 08:32:13.172402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.744 [2024-04-17 08:32:13.172634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.744 [2024-04-17 08:32:13.172651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.745 [2024-04-17 08:32:13.172667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.745 [2024-04-17 08:32:13.172684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.745 [2024-04-17 08:32:13.172700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.745 [2024-04-17 08:32:13.172717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.745 [2024-04-17 08:32:13.172732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.745 [2024-04-17 08:32:13.172749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.745 [2024-04-17 08:32:13.172765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.745 [2024-04-17 08:32:13.172782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.745 [2024-04-17 08:32:13.172797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.745 [2024-04-17 08:32:13.172821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.745 [2024-04-17 08:32:13.172838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.745 
[2024-04-17 08:32:13.172856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.745 [2024-04-17 08:32:13.172872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:00.745 [2024-04-17 08:32:13.172890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.745 [2024-04-17 08:32:13.172905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:00.745 [2024-04-17 08:32:13.172923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.745 [2024-04-17 08:32:13.172939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:00.745 [2024-04-17 08:32:13.172956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.745 [2024-04-17 08:32:13.172972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:00.745 [2024-04-17 08:32:13.172989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc8430 is same with the state(5) to be set
00:36:00.745 [2024-04-17 08:32:13.173008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:36:00.745 [2024-04-17 08:32:13.173021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:36:00.745 [2024-04-17 08:32:13.173033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106632 len:8 PRP1 0x0 PRP2 0x0
00:36:00.745 [2024-04-17 08:32:13.173050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:00.745 [2024-04-17 08:32:13.173111] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbc8430 was disconnected and freed. reset controller.
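(Editor's note: illustrative addition, not part of the captured log. The parenthesised pairs in the completion lines above, "(03/02)" and "(00/08)", correspond to the NVMe status code type and status code (sct/sc). The two values that dominate this run decode as shown in the minimal stand-alone C sketch below; the helper name is hypothetical and this is not SPDK code.)

    /* decode_status.c - hedged example: decodes the two (sct/sc) pairs seen above. */
    #include <stdio.h>

    /* Returns a human-readable name for the (sct/sc) pairs that appear in this log. */
    static const char *nvme_status_name(unsigned sct, unsigned sc)
    {
        if (sct == 0x0 && sc == 0x08)
            return "GENERIC / ABORTED - SQ DELETION";              /* queue deleted during controller reset */
        if (sct == 0x3 && sc == 0x02)
            return "PATH RELATED / ASYMMETRIC ACCESS INACCESSIBLE"; /* ANA state made this path unusable */
        return "other status";
    }

    int main(void)
    {
        printf("(03/02) -> %s\n", nvme_status_name(0x3, 0x02));
        printf("(00/08) -> %s\n", nvme_status_name(0x0, 0x08));
        return 0;
    }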
00:36:00.745 [2024-04-17 08:32:13.173272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:36:00.745 [2024-04-17 08:32:13.173296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:00.745 [2024-04-17 08:32:13.173314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:36:00.745 [2024-04-17 08:32:13.173341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:00.745 [2024-04-17 08:32:13.173359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:36:00.745 [2024-04-17 08:32:13.173375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:00.745 [2024-04-17 08:32:13.173391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:36:00.745 [2024-04-17 08:32:13.173407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:00.745 [2024-04-17 08:32:13.173422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9b980 is same with the state(5) to be set
00:36:00.745 [2024-04-17 08:32:13.174579] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:00.745 [2024-04-17 08:32:13.174633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9b980 (9): Bad file descriptor
00:36:00.745 [2024-04-17 08:32:13.175001] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.745 [2024-04-17 08:32:13.175075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.745 [2024-04-17 08:32:13.175120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.745 [2024-04-17 08:32:13.175140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9b980 with addr=10.0.0.2, port=4421
00:36:00.745 [2024-04-17 08:32:13.175158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9b980 is same with the state(5) to be set
00:36:00.745 [2024-04-17 08:32:13.175195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9b980 (9): Bad file descriptor
00:36:00.745 [2024-04-17 08:32:13.175226] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:00.745 [2024-04-17 08:32:13.175244] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:00.745 [2024-04-17 08:32:13.175270] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:00.745 [2024-04-17 08:32:13.175302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:00.745 [2024-04-17 08:32:13.175317] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:00.745 [2024-04-17 08:32:23.204366] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
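(Editor's note: illustrative addition, not part of the captured log. The reconnect above first fails with connect() errno = 111 against 10.0.0.2:4421 and only succeeds about ten seconds later. On Linux/glibc, errno 111 is ECONNREFUSED, which is consistent with nothing accepting connections on that address/port at that moment; the tiny stand-alone check below just demonstrates the mapping.)

    /* errno111.c - hedged example: shows what errno 111 means on Linux/glibc. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Matches the uring.c/posix.c "connect() failed, errno = 111" lines above. */
        printf("errno 111 = %s (ECONNREFUSED = %d)\n", strerror(111), ECONNREFUSED);
        return 0;
    }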
00:36:00.745 Received shutdown signal, test time was about 54.520529 seconds 00:36:00.745 00:36:00.745 Latency(us) 00:36:00.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.745 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:00.745 Verification LBA range: start 0x0 length 0x4000 00:36:00.745 Nvme0n1 : 54.52 11835.74 46.23 0.00 0.00 10801.45 726.19 7033243.39 00:36:00.745 =================================================================================================================== 00:36:00.745 Total : 11835.74 46.23 0.00 0.00 10801.45 726.19 7033243.39 00:36:00.745 08:32:33 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:00.745 08:32:33 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:36:00.745 08:32:33 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:36:00.745 08:32:33 -- host/multipath.sh@125 -- # nvmftestfini 00:36:00.745 08:32:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:00.745 08:32:33 -- nvmf/common.sh@116 -- # sync 00:36:00.745 08:32:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:00.745 08:32:33 -- nvmf/common.sh@119 -- # set +e 00:36:00.745 08:32:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:00.745 08:32:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:00.745 rmmod nvme_tcp 00:36:00.745 rmmod nvme_fabrics 00:36:00.745 rmmod nvme_keyring 00:36:00.745 08:32:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:00.745 08:32:33 -- nvmf/common.sh@123 -- # set -e 00:36:00.745 08:32:33 -- nvmf/common.sh@124 -- # return 0 00:36:00.745 08:32:33 -- nvmf/common.sh@477 -- # '[' -n 72351 ']' 00:36:00.745 08:32:33 -- nvmf/common.sh@478 -- # killprocess 72351 00:36:00.745 08:32:33 -- common/autotest_common.sh@926 -- # '[' -z 72351 ']' 00:36:00.745 08:32:33 -- common/autotest_common.sh@930 -- # kill -0 72351 00:36:00.745 08:32:33 -- common/autotest_common.sh@931 -- # uname 00:36:00.745 08:32:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:00.745 08:32:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72351 00:36:00.745 killing process with pid 72351 00:36:00.745 08:32:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:00.745 08:32:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:00.745 08:32:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72351' 00:36:00.745 08:32:33 -- common/autotest_common.sh@945 -- # kill 72351 00:36:00.745 08:32:33 -- common/autotest_common.sh@950 -- # wait 72351 00:36:01.005 08:32:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:36:01.005 08:32:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:01.005 08:32:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:01.005 08:32:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:01.005 08:32:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:01.005 08:32:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.005 08:32:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:01.005 08:32:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.005 08:32:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:36:01.005 00:36:01.005 real 0m59.639s 00:36:01.005 user 2m47.248s 00:36:01.005 sys 0m15.851s 00:36:01.005 08:32:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:01.005 
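(Editor's note: illustrative addition, not part of the captured log. A quick sanity check of the bdevperf summary above: 11835.74 IOPS at a 4096-byte IO size is 11835.74 * 4096 / 1048576 ~ 46.23 MiB/s, matching the MiB/s column, and with queue depth 128 Little's law gives 128 / 10801.45 us ~ 11850 IOPS, in line with the measured rate; the multi-second reconnect stalls show up instead in the ~7.03 s max latency. The stand-alone C snippet below just recomputes those figures.)

    /* summary_check.c - hedged example: recomputes the throughput figures quoted above. */
    #include <stdio.h>

    int main(void)
    {
        /* Figures copied from the bdevperf summary for Nvme0n1. */
        double iops = 11835.74, io_size = 4096.0, avg_lat_us = 10801.45, qdepth = 128.0;

        printf("throughput        = %.2f MiB/s\n", iops * io_size / (1024.0 * 1024.0)); /* ~46.23 */
        printf("little's law IOPS = %.0f\n", qdepth / (avg_lat_us / 1e6));              /* ~11850 */
        return 0;
    }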
08:32:34 -- common/autotest_common.sh@10 -- # set +x 00:36:01.005 ************************************ 00:36:01.005 END TEST nvmf_multipath 00:36:01.005 ************************************ 00:36:01.005 08:32:34 -- nvmf/nvmf.sh@116 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:36:01.005 08:32:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:36:01.005 08:32:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:01.005 08:32:34 -- common/autotest_common.sh@10 -- # set +x 00:36:01.005 ************************************ 00:36:01.005 START TEST nvmf_timeout 00:36:01.005 ************************************ 00:36:01.005 08:32:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:36:01.263 * Looking for test storage... 00:36:01.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:01.263 08:32:34 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:01.263 08:32:34 -- nvmf/common.sh@7 -- # uname -s 00:36:01.263 08:32:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.263 08:32:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.263 08:32:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.263 08:32:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.263 08:32:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.263 08:32:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.263 08:32:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.263 08:32:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.263 08:32:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.263 08:32:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.263 08:32:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:36:01.263 08:32:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:36:01.263 08:32:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.263 08:32:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.263 08:32:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:01.263 08:32:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:01.263 08:32:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.263 08:32:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.263 08:32:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.263 08:32:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.263 08:32:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.263 08:32:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.263 08:32:34 -- paths/export.sh@5 -- # export PATH 00:36:01.264 08:32:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.264 08:32:34 -- nvmf/common.sh@46 -- # : 0 00:36:01.264 08:32:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:36:01.264 08:32:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:36:01.264 08:32:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:36:01.264 08:32:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.264 08:32:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.264 08:32:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:36:01.264 08:32:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:36:01.264 08:32:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:36:01.264 08:32:34 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:01.264 08:32:34 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:01.264 08:32:34 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:01.264 08:32:34 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:36:01.264 08:32:34 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:01.264 08:32:34 -- host/timeout.sh@19 -- # nvmftestinit 00:36:01.264 08:32:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:36:01.264 08:32:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:01.264 08:32:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:36:01.264 08:32:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:36:01.264 08:32:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:36:01.264 08:32:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.264 08:32:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:01.264 08:32:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.264 08:32:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
00:36:01.264 08:32:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:36:01.264 08:32:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:36:01.264 08:32:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:36:01.264 08:32:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:36:01.264 08:32:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:36:01.264 08:32:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.264 08:32:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.264 08:32:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:01.264 08:32:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:36:01.264 08:32:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:01.264 08:32:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:01.264 08:32:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:01.264 08:32:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.264 08:32:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:01.264 08:32:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:01.264 08:32:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:01.264 08:32:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:01.264 08:32:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:36:01.264 08:32:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:36:01.264 Cannot find device "nvmf_tgt_br" 00:36:01.264 08:32:34 -- nvmf/common.sh@154 -- # true 00:36:01.264 08:32:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:36:01.264 Cannot find device "nvmf_tgt_br2" 00:36:01.264 08:32:34 -- nvmf/common.sh@155 -- # true 00:36:01.264 08:32:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:36:01.264 08:32:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:36:01.264 Cannot find device "nvmf_tgt_br" 00:36:01.264 08:32:34 -- nvmf/common.sh@157 -- # true 00:36:01.264 08:32:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:36:01.264 Cannot find device "nvmf_tgt_br2" 00:36:01.264 08:32:34 -- nvmf/common.sh@158 -- # true 00:36:01.264 08:32:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:36:01.264 08:32:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:36:01.264 08:32:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:01.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:01.264 08:32:34 -- nvmf/common.sh@161 -- # true 00:36:01.264 08:32:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:01.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:01.264 08:32:34 -- nvmf/common.sh@162 -- # true 00:36:01.264 08:32:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:36:01.264 08:32:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:01.522 08:32:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:01.523 08:32:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:01.523 08:32:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:01.523 08:32:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:01.523 08:32:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:36:01.523 08:32:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:01.523 08:32:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:01.523 08:32:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:36:01.523 08:32:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:36:01.523 08:32:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:36:01.523 08:32:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:36:01.523 08:32:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:01.523 08:32:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:01.523 08:32:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:01.523 08:32:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:36:01.523 08:32:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:36:01.523 08:32:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:36:01.523 08:32:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:01.523 08:32:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:01.523 08:32:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:01.523 08:32:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:01.523 08:32:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:36:01.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:36:01.523 00:36:01.523 --- 10.0.0.2 ping statistics --- 00:36:01.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.523 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:36:01.523 08:32:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:36:01.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:01.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:36:01.523 00:36:01.523 --- 10.0.0.3 ping statistics --- 00:36:01.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.523 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:36:01.523 08:32:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:01.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:01.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:36:01.523 00:36:01.523 --- 10.0.0.1 ping statistics --- 00:36:01.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.523 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:36:01.523 08:32:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.523 08:32:34 -- nvmf/common.sh@421 -- # return 0 00:36:01.523 08:32:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:36:01.523 08:32:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.523 08:32:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:36:01.523 08:32:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:36:01.523 08:32:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.523 08:32:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:36:01.523 08:32:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:36:01.523 08:32:34 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:36:01.523 08:32:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:36:01.523 08:32:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:01.523 08:32:34 -- common/autotest_common.sh@10 -- # set +x 00:36:01.523 08:32:34 -- nvmf/common.sh@469 -- # nvmfpid=73505 00:36:01.523 08:32:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:36:01.523 08:32:34 -- nvmf/common.sh@470 -- # waitforlisten 73505 00:36:01.523 08:32:34 -- common/autotest_common.sh@819 -- # '[' -z 73505 ']' 00:36:01.523 08:32:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.523 08:32:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:01.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.523 08:32:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.523 08:32:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:01.523 08:32:34 -- common/autotest_common.sh@10 -- # set +x 00:36:01.523 [2024-04-17 08:32:34.829273] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:01.523 [2024-04-17 08:32:34.829355] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.780 [2024-04-17 08:32:34.966500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:01.780 [2024-04-17 08:32:35.072131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:36:01.780 [2024-04-17 08:32:35.072272] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.780 [2024-04-17 08:32:35.072280] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.780 [2024-04-17 08:32:35.072285] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
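The nvmf_veth_init sequence traced above is what the three successful pings just verified: the target side lives in the nvmf_tgt_ns_spdk network namespace and is reachable from the initiator over veth pairs enslaved to the nvmf_br bridge. A condensed sketch of that bring-up, using only commands already shown in the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, follows the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                             # initiator -> target, as verified above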
00:36:01.780 [2024-04-17 08:32:35.072448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.780 [2024-04-17 08:32:35.072450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.713 08:32:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:02.713 08:32:35 -- common/autotest_common.sh@852 -- # return 0 00:36:02.713 08:32:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:36:02.713 08:32:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:02.713 08:32:35 -- common/autotest_common.sh@10 -- # set +x 00:36:02.713 08:32:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.713 08:32:35 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:02.713 08:32:35 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:02.971 [2024-04-17 08:32:36.062832] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.971 08:32:36 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:03.228 Malloc0 00:36:03.228 08:32:36 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:03.487 08:32:36 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:03.745 08:32:36 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.004 [2024-04-17 08:32:37.078144] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.004 08:32:37 -- host/timeout.sh@32 -- # bdevperf_pid=73554 00:36:04.004 08:32:37 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:36:04.004 08:32:37 -- host/timeout.sh@34 -- # waitforlisten 73554 /var/tmp/bdevperf.sock 00:36:04.004 08:32:37 -- common/autotest_common.sh@819 -- # '[' -z 73554 ']' 00:36:04.004 08:32:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:04.004 08:32:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:04.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:04.004 08:32:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:04.004 08:32:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:04.004 08:32:37 -- common/autotest_common.sh@10 -- # set +x 00:36:04.004 [2024-04-17 08:32:37.155572] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
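For reference, the RPC calls just traced are the entire target-side setup for this test: a TCP transport, one malloc-backed namespace under nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420, after which bdevperf is started as the initiator on its own RPC socket. A condensed sketch using the same commands and paths (the bdev_nvme attach with its reconnect parameters is traced just below):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side (nvmf_tgt already running inside the namespace)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf with its own RPC socket, then attach the remote controller
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2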
00:36:04.004 [2024-04-17 08:32:37.155659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73554 ] 00:36:04.004 [2024-04-17 08:32:37.294392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:04.262 [2024-04-17 08:32:37.398970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:04.829 08:32:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:04.829 08:32:38 -- common/autotest_common.sh@852 -- # return 0 00:36:04.829 08:32:38 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:36:05.089 08:32:38 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:36:05.347 NVMe0n1 00:36:05.347 08:32:38 -- host/timeout.sh@51 -- # rpc_pid=73578 00:36:05.347 08:32:38 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:05.347 08:32:38 -- host/timeout.sh@53 -- # sleep 1 00:36:05.347 Running I/O for 10 seconds... 00:36:06.287 08:32:39 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:06.550 [2024-04-17 08:32:39.768288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.550 [2024-04-17 08:32:39.768349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.550 [2024-04-17 08:32:39.768356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.550 [2024-04-17 08:32:39.768362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.550 [2024-04-17 08:32:39.768368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.550 [2024-04-17 08:32:39.768374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.550 [2024-04-17 08:32:39.768380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.550 [2024-04-17 08:32:39.768386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.551 [2024-04-17 08:32:39.768391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.551 [2024-04-17 08:32:39.768398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ad20 is same with the state(5) to be set 00:36:06.551 [2024-04-17 08:32:39.768451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.551 [2024-04-17 08:32:39.768481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:36:06.551 [... the qid:1 abort dump continues in the same pattern for every remaining outstanding command: nvme_io_qpair_print_command prints the READ or WRITE (sqid:1, nsid:1, len:8, varying cid and lba) and spdk_nvme_print_completion reports it as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.554 [2024-04-17 08:32:39.770299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.554 [2024-04-17 08:32:39.770313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.554 [2024-04-17 08:32:39.770336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.554 [2024-04-17 08:32:39.770351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.554 [2024-04-17 08:32:39.770365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ed200 is same with the state(5) to be set 00:36:06.554 [2024-04-17 08:32:39.770381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:06.554 [2024-04-17 08:32:39.770387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:06.554 [2024-04-17 08:32:39.770395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123424 len:8 PRP1 0x0 PRP2 0x0 00:36:06.554 [2024-04-17 08:32:39.770401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770445] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6ed200 was disconnected and freed. reset controller. 
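[editor's note] The burst of "ABORTED - SQ DELETION" completions above is the expected flood when bdev_nvme tears down the I/O qpair and starts a controller reset: every queued READ/WRITE is completed manually with that status before the qpair (0x6ed200 here) is freed. Rather than reading the dump entry by entry, a quick triage is to count the aborted completions; a minimal sketch, assuming this console output has been saved to a file named build.log (a hypothetical name, not part of this run):
    # total aborted completions in the saved log
    grep -o 'ABORTED - SQ DELETION' build.log | wc -l
    # split the aborted commands by opcode (the entries above print READ/WRITE per command)
    grep -o 'NOTICE\*: READ sqid:1' build.log | wc -l
    grep -o 'NOTICE\*: WRITE sqid:1' build.log | wc -l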
00:36:06.554 [2024-04-17 08:32:39.770512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:06.554 [2024-04-17 08:32:39.770522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:06.554 [2024-04-17 08:32:39.770536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:06.554 [2024-04-17 08:32:39.770549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:06.554 [2024-04-17 08:32:39.770564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.554 [2024-04-17 08:32:39.770570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aa000 is same with the state(5) to be set 00:36:06.554 [2024-04-17 08:32:39.770782] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:06.554 [2024-04-17 08:32:39.770800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aa000 (9): Bad file descriptor 00:36:06.554 [2024-04-17 08:32:39.770876] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.554 [2024-04-17 08:32:39.770921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.554 [2024-04-17 08:32:39.770947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.554 [2024-04-17 08:32:39.770957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6aa000 with addr=10.0.0.2, port=4420 00:36:06.554 [2024-04-17 08:32:39.770965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aa000 is same with the state(5) to be set 00:36:06.554 [2024-04-17 08:32:39.770979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aa000 (9): Bad file descriptor 00:36:06.554 [2024-04-17 08:32:39.770991] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:06.554 [2024-04-17 08:32:39.770998] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:06.554 [2024-04-17 08:32:39.771007] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:06.554 08:32:39 -- host/timeout.sh@56 -- # sleep 2 00:36:06.554 [2024-04-17 08:32:39.789366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
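[editor's note] The connect() failures with errno = 111 (connection refused) on both the uring and posix sock layers are the condition this test provokes: the target's TCP listener on 10.0.0.2:4420 is taken away, so every reconnect attempt from the host is refused and bdev_nvme keeps retrying the reset until the listener returns. The removal itself happens outside this excerpt, but the symmetric add_listener call appears further down in this same run; as a sketch, the listener is toggled from the target side like this (NQN, address and port exactly as in this run, rpc.py using its default target socket):
    # remove the TCP listener to force connect() failures (errno 111) on the initiator
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host-side bdev_nvme keeps retrying the controller reset until the listener comes back
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420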
00:36:06.554 [2024-04-17 08:32:39.789420] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:08.501 [2024-04-17 08:32:41.785730] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.501 [2024-04-17 08:32:41.785820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.501 [2024-04-17 08:32:41.785848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.501 [2024-04-17 08:32:41.785859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6aa000 with addr=10.0.0.2, port=4420 00:36:08.501 [2024-04-17 08:32:41.785871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aa000 is same with the state(5) to be set 00:36:08.501 [2024-04-17 08:32:41.785893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aa000 (9): Bad file descriptor 00:36:08.501 [2024-04-17 08:32:41.785909] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:08.501 [2024-04-17 08:32:41.785917] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:08.501 [2024-04-17 08:32:41.785926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:08.501 [2024-04-17 08:32:41.785951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:08.501 [2024-04-17 08:32:41.785959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:08.501 08:32:41 -- host/timeout.sh@57 -- # get_controller 00:36:08.501 08:32:41 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:08.501 08:32:41 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:36:08.760 08:32:42 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:36:08.760 08:32:42 -- host/timeout.sh@58 -- # get_bdev 00:36:08.760 08:32:42 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:36:08.760 08:32:42 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:36:09.018 08:32:42 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:36:09.018 08:32:42 -- host/timeout.sh@61 -- # sleep 5 00:36:10.973 [2024-04-17 08:32:43.782234] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.973 [2024-04-17 08:32:43.782322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.973 [2024-04-17 08:32:43.782351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.973 [2024-04-17 08:32:43.782361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6aa000 with addr=10.0.0.2, port=4420 00:36:10.973 [2024-04-17 08:32:43.782371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aa000 is same with the state(5) to be set 00:36:10.974 [2024-04-17 08:32:43.782392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aa000 (9): Bad file descriptor 00:36:10.974 [2024-04-17 08:32:43.782407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:10.974 [2024-04-17 08:32:43.782414] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
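[editor's note] While the controller is unreachable, the get_controller/get_bdev checks above verify that bdev_nvme still exposes the controller and its namespace bdev, i.e. the ctrlr-loss timeout has not yet expired. The two helpers boil down to RPC calls against the bdevperf application socket; a sketch using the exact invocations from this run:
    # does bdevperf still see the NVMe controller?  (expects "NVMe0" while within the loss timeout)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # does the namespace bdev still exist?  (expects "NVMe0n1")
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'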
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:10.974 [2024-04-17 08:32:43.782422] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:10.974 [2024-04-17 08:32:43.782446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:10.974 [2024-04-17 08:32:43.782454] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:12.881 [2024-04-17 08:32:45.778655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:13.447 00:36:13.447 Latency(us) 00:36:13.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.447 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:13.447 Verification LBA range: start 0x0 length 0x4000 00:36:13.447 NVMe0n1 : 8.10 1899.46 7.42 15.81 0.00 66907.77 2589.96 7033243.39 00:36:13.447 =================================================================================================================== 00:36:13.447 Total : 1899.46 7.42 15.81 0.00 66907.77 2589.96 7033243.39 00:36:13.447 0 00:36:14.037 08:32:47 -- host/timeout.sh@62 -- # get_controller 00:36:14.037 08:32:47 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:14.037 08:32:47 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:36:14.295 08:32:47 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:36:14.295 08:32:47 -- host/timeout.sh@63 -- # get_bdev 00:36:14.295 08:32:47 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:36:14.295 08:32:47 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:36:14.553 08:32:47 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:36:14.553 08:32:47 -- host/timeout.sh@65 -- # wait 73578 00:36:14.553 08:32:47 -- host/timeout.sh@67 -- # killprocess 73554 00:36:14.553 08:32:47 -- common/autotest_common.sh@926 -- # '[' -z 73554 ']' 00:36:14.553 08:32:47 -- common/autotest_common.sh@930 -- # kill -0 73554 00:36:14.553 08:32:47 -- common/autotest_common.sh@931 -- # uname 00:36:14.553 08:32:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:14.553 08:32:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73554 00:36:14.553 08:32:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:36:14.553 08:32:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:36:14.553 killing process with pid 73554 00:36:14.553 08:32:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73554' 00:36:14.553 08:32:47 -- common/autotest_common.sh@945 -- # kill 73554 00:36:14.553 Received shutdown signal, test time was about 9.092404 seconds 00:36:14.553 00:36:14.553 Latency(us) 00:36:14.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.553 =================================================================================================================== 00:36:14.553 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:14.553 08:32:47 -- common/autotest_common.sh@950 -- # wait 73554 00:36:14.812 08:32:47 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:15.071 [2024-04-17 08:32:48.225708] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.071 08:32:48 -- host/timeout.sh@74 -- # 
bdevperf_pid=73699 00:36:15.071 08:32:48 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:36:15.071 08:32:48 -- host/timeout.sh@76 -- # waitforlisten 73699 /var/tmp/bdevperf.sock 00:36:15.071 08:32:48 -- common/autotest_common.sh@819 -- # '[' -z 73699 ']' 00:36:15.071 08:32:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:15.071 08:32:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:15.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:15.071 08:32:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:15.071 08:32:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:15.071 08:32:48 -- common/autotest_common.sh@10 -- # set +x 00:36:15.071 [2024-04-17 08:32:48.301352] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:15.071 [2024-04-17 08:32:48.301438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73699 ] 00:36:15.330 [2024-04-17 08:32:48.427412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.330 [2024-04-17 08:32:48.532425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:16.265 08:32:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:16.265 08:32:49 -- common/autotest_common.sh@852 -- # return 0 00:36:16.265 08:32:49 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:36:16.265 08:32:49 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:36:16.523 NVMe0n1 00:36:16.523 08:32:49 -- host/timeout.sh@84 -- # rpc_pid=73723 00:36:16.523 08:32:49 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:16.523 08:32:49 -- host/timeout.sh@86 -- # sleep 1 00:36:16.798 Running I/O for 10 seconds... 
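[editor's note] Condensing the setup traced above into its command sequence (paths, names and flags are exactly those visible in this run; the waitforlisten step is the harness helper that blocks until the RPC socket exists):
    # start bdevperf on core 2 (-m 0x4) in RPC-driven mode and wait for its socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    # unlimited controller-level retries, then attach with the timeout knobs under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # kick off the 10-second verify workload over the same RPC socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests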
00:36:17.735 08:32:50 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:17.735 [2024-04-17 08:32:50.998096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2696be0 is same with the state(5) to be set 00:36:17.735 [2024-04-17 08:32:50.998404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.735 [2024-04-17 08:32:50.998436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.735 [2024-04-17 08:32:50.998456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.735 [2024-04-17 08:32:50.998463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.735 [2024-04-17 08:32:50.998473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.735 [2024-04-17 08:32:50.998480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.735 [2024-04-17 08:32:50.998489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.735 [2024-04-17 08:32:50.998495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998543] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.998827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.998842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.998857] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.998901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.998930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.998945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.998960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.998976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.998990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.998998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.999005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.999013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.736 [2024-04-17 08:32:50.999021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.999029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.999036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.999044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.999051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.999059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.999065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.999074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.999080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.999089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.999095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.736 [2024-04-17 08:32:50.999103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.736 [2024-04-17 08:32:50.999109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 
08:32:50.999351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.737 [2024-04-17 08:32:50.999794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.737 [2024-04-17 08:32:50.999830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.737 [2024-04-17 08:32:50.999842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:50.999851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:50.999858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:50.999867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:50.999879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:50.999888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:50.999895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:50.999904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:50.999916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:50.999925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:50.999932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:50.999941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:50.999947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:50.999961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:50.999968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:50.999977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:50.999984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:50.999999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:51.000006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:51.000043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 
[2024-04-17 08:32:51.000272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:51.000287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:51.000351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:51.000382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:51.000424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:51.000439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:51.000459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000473] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.738 [2024-04-17 08:32:51.000539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.738 [2024-04-17 08:32:51.000562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.738 [2024-04-17 08:32:51.000575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.739 [2024-04-17 08:32:51.000585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.739 [2024-04-17 08:32:51.000592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.739 [2024-04-17 08:32:51.000600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.739 [2024-04-17 08:32:51.000606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.739 [2024-04-17 08:32:51.000615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.739 [2024-04-17 08:32:51.000622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.739 [2024-04-17 08:32:51.000634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59320 is same with the state(5) to be set 00:36:17.739 [2024-04-17 08:32:51.000645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:36:17.739 [2024-04-17 08:32:51.000650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:17.739 [2024-04-17 08:32:51.000659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128488 len:8 PRP1 0x0 PRP2 0x0 00:36:17.739 [2024-04-17 08:32:51.000670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.739 [2024-04-17 08:32:51.000723] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c59320 was disconnected and freed. reset controller. 00:36:17.739 [2024-04-17 08:32:51.000959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:17.739 [2024-04-17 08:32:51.001030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16000 (9): Bad file descriptor 00:36:17.739 [2024-04-17 08:32:51.001108] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.739 [2024-04-17 08:32:51.001153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.739 [2024-04-17 08:32:51.001177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.739 [2024-04-17 08:32:51.001186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16000 with addr=10.0.0.2, port=4420 00:36:17.739 [2024-04-17 08:32:51.001194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16000 is same with the state(5) to be set 00:36:17.739 [2024-04-17 08:32:51.001207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16000 (9): Bad file descriptor 00:36:17.739 [2024-04-17 08:32:51.001218] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:17.739 [2024-04-17 08:32:51.001225] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:17.739 [2024-04-17 08:32:51.001232] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:17.739 [2024-04-17 08:32:51.001249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:17.739 [2024-04-17 08:32:51.001256] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:17.739 08:32:51 -- host/timeout.sh@90 -- # sleep 1
00:36:18.674 [2024-04-17 08:32:51.999452] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.674 [2024-04-17 08:32:51.999566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.674 [2024-04-17 08:32:51.999593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.674 [2024-04-17 08:32:51.999603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16000 with addr=10.0.0.2, port=4420
00:36:18.674 [2024-04-17 08:32:51.999613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16000 is same with the state(5) to be set
00:36:18.674 [2024-04-17 08:32:51.999633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16000 (9): Bad file descriptor
00:36:18.674 [2024-04-17 08:32:51.999647] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:18.674 [2024-04-17 08:32:51.999654] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:18.674 [2024-04-17 08:32:51.999662] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:18.674 [2024-04-17 08:32:51.999684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:18.674 [2024-04-17 08:32:51.999692] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:18.933 08:32:52 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:18.933 [2024-04-17 08:32:52.225505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:18.933 08:32:52 -- host/timeout.sh@92 -- # wait 73723
00:36:19.868 [2024-04-17 08:32:53.014878] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:36:27.994
00:36:27.994                                           Latency(us)
00:36:27.994 Device Information : runtime(s)      IOPS      MiB/s    Fail/s     TO/s    Average       min        max
00:36:27.994 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:36:27.994 Verification LBA range: start 0x0 length 0x4000
00:36:27.994 NVMe0n1            :      10.01  10027.27      39.17      0.00     0.00   12745.81    862.13 3018433.62
00:36:27.994 ===================================================================================================================
00:36:27.994 Total              :             10027.27      39.17      0.00     0.00   12745.81    862.13 3018433.62
00:36:27.994 0
00:36:27.994 08:32:59 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:36:27.994 08:32:59 -- host/timeout.sh@97 -- # rpc_pid=73832
00:36:27.994 08:32:59 -- host/timeout.sh@98 -- # sleep 1
00:36:27.994 Running I/O for 10 seconds...
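The entries above show host/timeout.sh exercising failure and recovery by toggling the subsystem's TCP listener while bdevperf keeps I/O in flight: the listener goes away, in-flight commands complete as ABORTED - SQ DELETION, the host's reconnect attempts fail with connect() errno = 111, and once the listener is re-added the controller reset succeeds. A minimal sketch of that toggle follows; it is a reconstruction, not the verbatim test script. Only the two rpc.py listener calls, the NQN, the 10.0.0.2:4420 address and the one-second pacing are taken from the log itself; the variable names are illustrative.

  # Sketch only (assumed wrapper, not host/timeout.sh itself): drop and restore the
  # TCP listener so queued I/O aborts with SQ DELETION and the host must reconnect.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used elsewhere in this log
  nqn=nqn.2016-06.io.spdk:cnode1

  # Remove the listener: in-flight commands abort and reconnects fail (errno 111).
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  sleep 1   # give the host time to observe the failure and retry, as the timestamps above suggest
  # Re-add the listener: the next reconnect succeeds and the controller reset completes.
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420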
00:36:27.994 08:33:00 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:27.994 [2024-04-17 08:33:01.074261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.994 [2024-04-17 08:33:01.074460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.995 [2024-04-17 08:33:01.074465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.995 [2024-04-17 08:33:01.074470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.995 [2024-04-17 08:33:01.074475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.995 [2024-04-17 08:33:01.074480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.995 [2024-04-17 08:33:01.074485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.995 [2024-04-17 08:33:01.074490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26978d0 is same with the state(5) to be set 00:36:27.995 [2024-04-17 08:33:01.074557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:115 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124464 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.074987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.074995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.995 [2024-04-17 08:33:01.075037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.995 [2024-04-17 08:33:01.075242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.995 [2024-04-17 08:33:01.075276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.995 [2024-04-17 08:33:01.075293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.995 
[2024-04-17 08:33:01.075322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.995 [2024-04-17 08:33:01.075330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075509] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075886] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.075981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.075989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.075995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.076008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.076014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.076022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.076028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.076035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.996 [2024-04-17 08:33:01.076042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.996 [2024-04-17 08:33:01.076059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.996 [2024-04-17 08:33:01.076066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.997 [2024-04-17 08:33:01.076219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.997 [2024-04-17 08:33:01.076312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.997 [2024-04-17 08:33:01.076337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 
08:33:01.076448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.997 [2024-04-17 08:33:01.076534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.997 [2024-04-17 08:33:01.076568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.997 [2024-04-17 08:33:01.076583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.997 [2024-04-17 08:33:01.076597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.997 [2024-04-17 08:33:01.076615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.997 [2024-04-17 08:33:01.076623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076808] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:27.998 [2024-04-17 08:33:01.076939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.076988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.076995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.998 [2024-04-17 08:33:01.077001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.077020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8daf0 is same with the state(5) to be set 00:36:27.998 [2024-04-17 08:33:01.077034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:27.998 [2024-04-17 08:33:01.077040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:27.998 [2024-04-17 08:33:01.077056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125096 len:8 PRP1 0x0 PRP2 0x0 00:36:27.998 [2024-04-17 08:33:01.077063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.077118] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c8daf0 was disconnected and freed. reset controller. 00:36:27.998 [2024-04-17 08:33:01.077190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:27.998 [2024-04-17 08:33:01.077212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.077221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:27.998 [2024-04-17 08:33:01.077228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.077234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:27.998 [2024-04-17 08:33:01.077240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.077247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:27.998 [2024-04-17 08:33:01.077253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:27.998 [2024-04-17 08:33:01.077259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16000 is same with the state(5) to be set 00:36:27.998 [2024-04-17 08:33:01.077505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:27.998 [2024-04-17 08:33:01.077529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16000 (9): Bad file descriptor 00:36:27.998 [2024-04-17 08:33:01.077603] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.998 [2024-04-17 08:33:01.077649] posix.c:1032:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:36:27.998 [2024-04-17 08:33:01.077674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.998 [2024-04-17 08:33:01.077686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16000 with addr=10.0.0.2, port=4420 00:36:27.998 [2024-04-17 08:33:01.077693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16000 is same with the state(5) to be set 00:36:27.998 [2024-04-17 08:33:01.077706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16000 (9): Bad file descriptor 00:36:27.998 [2024-04-17 08:33:01.077723] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:27.998 [2024-04-17 08:33:01.077729] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:27.998 08:33:01 -- host/timeout.sh@101 -- # sleep 3 00:36:27.998 [2024-04-17 08:33:01.095400] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:27.998 [2024-04-17 08:33:01.095494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:27.998 [2024-04-17 08:33:01.095510] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:28.937 [2024-04-17 08:33:02.093727] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.937 [2024-04-17 08:33:02.093815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.937 [2024-04-17 08:33:02.093841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.937 [2024-04-17 08:33:02.093850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16000 with addr=10.0.0.2, port=4420 00:36:28.937 [2024-04-17 08:33:02.093860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16000 is same with the state(5) to be set 00:36:28.937 [2024-04-17 08:33:02.093883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16000 (9): Bad file descriptor 00:36:28.937 [2024-04-17 08:33:02.093898] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:28.937 [2024-04-17 08:33:02.093905] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:28.937 [2024-04-17 08:33:02.093913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:28.937 [2024-04-17 08:33:02.093942] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:28.937 [2024-04-17 08:33:02.093950] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:29.878 [2024-04-17 08:33:03.092151] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.878 [2024-04-17 08:33:03.092246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.878 [2024-04-17 08:33:03.092274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.878 [2024-04-17 08:33:03.092284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16000 with addr=10.0.0.2, port=4420 00:36:29.878 [2024-04-17 08:33:03.092295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16000 is same with the state(5) to be set 00:36:29.878 [2024-04-17 08:33:03.092325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16000 (9): Bad file descriptor 00:36:29.878 [2024-04-17 08:33:03.092348] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:29.878 [2024-04-17 08:33:03.092355] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:29.878 [2024-04-17 08:33:03.092363] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:29.878 [2024-04-17 08:33:03.092385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:29.878 [2024-04-17 08:33:03.092393] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.816 [2024-04-17 08:33:04.090790] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-04-17 08:33:04.090883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-04-17 08:33:04.090909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-04-17 08:33:04.090918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c16000 with addr=10.0.0.2, port=4420 00:36:30.816 [2024-04-17 08:33:04.090928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16000 is same with the state(5) to be set 00:36:30.816 [2024-04-17 08:33:04.091111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c16000 (9): Bad file descriptor 00:36:30.816 [2024-04-17 08:33:04.091204] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:30.816 [2024-04-17 08:33:04.091220] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:30.816 [2024-04-17 08:33:04.091227] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:30.816 [2024-04-17 08:33:04.093439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
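At this point the test has pulled the TCP listener out from under the initiator: every reconnect attempt from bdev_nvme fails in both the io_uring and posix socket layers with errno 111 (ECONNREFUSED), and the controller is rescheduled for another reset roughly once per second until the listener comes back at host/timeout.sh@102 below. A minimal sketch of the listener bounce that produces this window is shown here; both RPCs appear verbatim elsewhere in this trace, but the exact wiring and timing live in host/timeout.sh, so treat the snippet as illustrative rather than the script itself.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# drop the listener so reconnect attempts hit ECONNREFUSED (errno 111)
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3    # matches the host/timeout.sh@101 sleep traced above
# restore the listener; the next reconnect attempt should then succeed
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420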
00:36:30.816 [2024-04-17 08:33:04.093462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:30.816 08:33:04 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:31.076 [2024-04-17 08:33:04.281236] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:31.076 08:33:04 -- host/timeout.sh@103 -- # wait 73832 00:36:32.018 [2024-04-17 08:33:05.112054] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:36:37.295 00:36:37.295 Latency(us) 00:36:37.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.295 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:37.295 Verification LBA range: start 0x0 length 0x4000 00:36:37.295 NVMe0n1 : 10.01 8598.45 33.59 6856.10 0.00 8269.06 327.32 3018433.62 00:36:37.295 =================================================================================================================== 00:36:37.295 Total : 8598.45 33.59 6856.10 0.00 8269.06 0.00 3018433.62 00:36:37.295 0 00:36:37.295 08:33:09 -- host/timeout.sh@105 -- # killprocess 73699 00:36:37.295 08:33:09 -- common/autotest_common.sh@926 -- # '[' -z 73699 ']' 00:36:37.295 08:33:09 -- common/autotest_common.sh@930 -- # kill -0 73699 00:36:37.295 08:33:09 -- common/autotest_common.sh@931 -- # uname 00:36:37.295 08:33:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:37.295 08:33:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73699 00:36:37.295 killing process with pid 73699 00:36:37.295 Received shutdown signal, test time was about 10.000000 seconds 00:36:37.295 00:36:37.295 Latency(us) 00:36:37.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.295 =================================================================================================================== 00:36:37.295 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:37.295 08:33:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:36:37.295 08:33:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:36:37.295 08:33:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73699' 00:36:37.295 08:33:10 -- common/autotest_common.sh@945 -- # kill 73699 00:36:37.295 08:33:10 -- common/autotest_common.sh@950 -- # wait 73699 00:36:37.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:37.295 08:33:10 -- host/timeout.sh@110 -- # bdevperf_pid=73942 00:36:37.295 08:33:10 -- host/timeout.sh@112 -- # waitforlisten 73942 /var/tmp/bdevperf.sock 00:36:37.295 08:33:10 -- common/autotest_common.sh@819 -- # '[' -z 73942 ']' 00:36:37.295 08:33:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:37.295 08:33:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:37.295 08:33:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
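The NVMe0n1 result row above is internally consistent: at the 4096-byte I/O size reported in the job header, 8598.45 IOPS x 4096 B is roughly 35.2 MB/s, i.e. about 33.59 MiB/s, which matches the MiB/s column. The killprocess call that follows is traced one autotest_common.sh line at a time; a rough bash reconstruction of what those traced lines do is sketched below. It is inferred only from the xtrace output here, so the real helper almost certainly handles the sudo wrapper case and error paths differently.

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # a pid argument is required
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_2 above
    fi
    if [ "$process_name" = sudo ]; then
        # the trace checks for a sudo wrapper; that branch is not exercised
        # here, so it is left as a no-op in this sketch
        :
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}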
00:36:37.295 08:33:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:37.295 08:33:10 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:36:37.295 08:33:10 -- common/autotest_common.sh@10 -- # set +x 00:36:37.295 [2024-04-17 08:33:10.291648] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:37.295 [2024-04-17 08:33:10.291727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73942 ] 00:36:37.295 [2024-04-17 08:33:10.430522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.295 [2024-04-17 08:33:10.534761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.258 08:33:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:38.258 08:33:11 -- common/autotest_common.sh@852 -- # return 0 00:36:38.258 08:33:11 -- host/timeout.sh@116 -- # dtrace_pid=73958 00:36:38.258 08:33:11 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:36:38.258 08:33:11 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 73942 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:36:38.258 08:33:11 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:36:38.528 NVMe0n1 00:36:38.528 08:33:11 -- host/timeout.sh@124 -- # rpc_pid=73999 00:36:38.528 08:33:11 -- host/timeout.sh@125 -- # sleep 1 00:36:38.528 08:33:11 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:38.788 Running I/O for 10 seconds... 
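The trace above sets up the second timeout scenario: a fresh bdevperf is started in wait-for-RPC mode (-z) on its own RPC socket, NVMe bdev options are adjusted (-r -1 -e 9, copied verbatim from the trace), a bpftrace probe from scripts/bpf/nvmf_timeout.bt is attached to the bdevperf pid, and the controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 before perform_tests launches the 10-second randread run. The steps are collected below purely for readability; every argument is copied from the traced commands, and the surrounding waitforlisten and error handling in host/timeout.sh is omitted.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!
# (the script waits for $sock to appear before issuing RPCs, per waitforlisten above)

$rpc -s "$sock" bdev_nvme_set_options -r -1 -e 9
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$bdevperf_pid" \
    /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

$rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests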
00:36:39.728 08:33:12 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:39.728 [2024-04-17 08:33:12.969127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 
08:33:12.969435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same 
with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.728 [2024-04-17 08:33:12.969584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969650] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969671] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the 
state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698e90 is same with the state(5) to be set 00:36:39.729 [2024-04-17 08:33:12.969972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970224] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117224 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 
[2024-04-17 08:33:12.970807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.729 [2024-04-17 08:33:12.970914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:33536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.729 [2024-04-17 08:33:12.970922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.970931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.970937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.970958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.970965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.970973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.970980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.970994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 
[2024-04-17 08:33:12.971604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971956] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.971984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.971995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.972017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.972032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.972055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.972069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.972084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.972105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.972120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.972147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.730 [2024-04-17 08:33:12.972163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.730 [2024-04-17 08:33:12.972171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.731 [2024-04-17 08:33:12.972437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x663560 is same with the state(5) to be set 00:36:39.731 [2024-04-17 08:33:12.972470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:39.731 [2024-04-17 08:33:12.972475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:39.731 [2024-04-17 08:33:12.972484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47360 len:8 PRP1 0x0 PRP2 0x0 00:36:39.731 [2024-04-17 08:33:12.972491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:39.731 [2024-04-17 08:33:12.972549] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x663560 was disconnected and freed. reset controller. 
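Editor's note: the block above is the expected teardown path for this timeout test. Once the submission queue is deleted, every READ still queued on tqpair 0x663560 is completed manually with ABORTED - SQ DELETION, and the qpair is then disconnected and freed so the controller reset can proceed. Rather than reading the flood line by line, it can be collapsed into counts; this is a minimal sketch only, assuming the console output has been saved to a file named build.log (a hypothetical name, not part of this run):

  # total number of queued commands aborted by the SQ deletion (hypothetical build.log capture)
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l

  # distinct command identifiers (cid) drained from qid 1, to confirm the whole queue emptied
  grep -Eo 'READ sqid:1 cid:[0-9]+' build.log | sort -u | wc -l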
00:36:39.731 [2024-04-17 08:33:12.972846] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:39.731 [2024-04-17 08:33:12.972945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61b3d0 (9): Bad file descriptor 00:36:39.731 [2024-04-17 08:33:12.973035] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.731 [2024-04-17 08:33:12.973083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.731 [2024-04-17 08:33:12.973121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.731 [2024-04-17 08:33:12.973132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61b3d0 with addr=10.0.0.2, port=4420 00:36:39.731 [2024-04-17 08:33:12.973139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61b3d0 is same with the state(5) to be set 00:36:39.731 [2024-04-17 08:33:12.973153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61b3d0 (9): Bad file descriptor 00:36:39.731 [2024-04-17 08:33:12.973165] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:39.731 [2024-04-17 08:33:12.973171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:39.731 [2024-04-17 08:33:12.973179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:39.731 [2024-04-17 08:33:12.973205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:39.731 [2024-04-17 08:33:12.973212] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:39.731 08:33:12 -- host/timeout.sh@128 -- # wait 73999 00:36:41.645 [2024-04-17 08:33:14.969526] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.645 [2024-04-17 08:33:14.969620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.645 [2024-04-17 08:33:14.969647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.645 [2024-04-17 08:33:14.969656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61b3d0 with addr=10.0.0.2, port=4420 00:36:41.645 [2024-04-17 08:33:14.969667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61b3d0 is same with the state(5) to be set 00:36:41.645 [2024-04-17 08:33:14.969690] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61b3d0 (9): Bad file descriptor 00:36:41.645 [2024-04-17 08:33:14.969712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:41.645 [2024-04-17 08:33:14.969719] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:41.645 [2024-04-17 08:33:14.969727] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:41.645 [2024-04-17 08:33:14.969750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:41.645 [2024-04-17 08:33:14.969758] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:44.173 [2024-04-17 08:33:16.966117] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.173 [2024-04-17 08:33:16.966211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.174 [2024-04-17 08:33:16.966239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:44.174 [2024-04-17 08:33:16.966249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61b3d0 with addr=10.0.0.2, port=4420 00:36:44.174 [2024-04-17 08:33:16.966260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61b3d0 is same with the state(5) to be set 00:36:44.174 [2024-04-17 08:33:16.966289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61b3d0 (9): Bad file descriptor 00:36:44.174 [2024-04-17 08:33:16.966317] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:44.174 [2024-04-17 08:33:16.966325] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:44.174 [2024-04-17 08:33:16.966333] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:44.174 [2024-04-17 08:33:16.966356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:44.174 [2024-04-17 08:33:16.966364] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:46.073 [2024-04-17 08:33:18.962598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:46.638 00:36:46.638 Latency(us) 00:36:46.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:46.638 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:36:46.638 NVMe0n1 : 8.11 2208.86 8.63 15.79 0.00 57577.54 7240.44 7033243.39 00:36:46.638 =================================================================================================================== 00:36:46.638 Total : 2208.86 8.63 15.79 0.00 57577.54 7240.44 7033243.39 00:36:46.638 0 00:36:46.896 08:33:19 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:36:46.896 Attaching 5 probes... 
00:36:46.896 1206.653577: reset bdev controller NVMe0 00:36:46.896 1206.797230: reconnect bdev controller NVMe0 00:36:46.896 3203.208726: reconnect delay bdev controller NVMe0 00:36:46.896 3203.233669: reconnect bdev controller NVMe0 00:36:46.896 5199.791568: reconnect delay bdev controller NVMe0 00:36:46.896 5199.815539: reconnect bdev controller NVMe0 00:36:46.896 7196.368215: reconnect delay bdev controller NVMe0 00:36:46.896 7196.394215: reconnect bdev controller NVMe0 00:36:46.896 08:33:19 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:36:46.896 08:33:19 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:36:46.897 08:33:19 -- host/timeout.sh@136 -- # kill 73958 00:36:46.897 08:33:19 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:36:46.897 08:33:19 -- host/timeout.sh@139 -- # killprocess 73942 00:36:46.897 08:33:19 -- common/autotest_common.sh@926 -- # '[' -z 73942 ']' 00:36:46.897 08:33:19 -- common/autotest_common.sh@930 -- # kill -0 73942 00:36:46.897 08:33:19 -- common/autotest_common.sh@931 -- # uname 00:36:46.897 08:33:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:46.897 08:33:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73942 00:36:46.897 killing process with pid 73942 00:36:46.897 Received shutdown signal, test time was about 8.170972 seconds 00:36:46.897 00:36:46.897 Latency(us) 00:36:46.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:46.897 =================================================================================================================== 00:36:46.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:46.897 08:33:20 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:36:46.897 08:33:20 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:36:46.897 08:33:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73942' 00:36:46.897 08:33:20 -- common/autotest_common.sh@945 -- # kill 73942 00:36:46.897 08:33:20 -- common/autotest_common.sh@950 -- # wait 73942 00:36:47.154 08:33:20 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:47.154 08:33:20 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:36:47.154 08:33:20 -- host/timeout.sh@145 -- # nvmftestfini 00:36:47.154 08:33:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:47.154 08:33:20 -- nvmf/common.sh@116 -- # sync 00:36:47.412 08:33:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:47.412 08:33:20 -- nvmf/common.sh@119 -- # set +e 00:36:47.412 08:33:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:47.412 08:33:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:47.412 rmmod nvme_tcp 00:36:47.412 rmmod nvme_fabrics 00:36:47.412 rmmod nvme_keyring 00:36:47.412 08:33:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:47.412 08:33:20 -- nvmf/common.sh@123 -- # set -e 00:36:47.412 08:33:20 -- nvmf/common.sh@124 -- # return 0 00:36:47.412 08:33:20 -- nvmf/common.sh@477 -- # '[' -n 73505 ']' 00:36:47.412 08:33:20 -- nvmf/common.sh@478 -- # killprocess 73505 00:36:47.412 08:33:20 -- common/autotest_common.sh@926 -- # '[' -z 73505 ']' 00:36:47.412 08:33:20 -- common/autotest_common.sh@930 -- # kill -0 73505 00:36:47.412 08:33:20 -- common/autotest_common.sh@931 -- # uname 00:36:47.412 08:33:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:47.412 08:33:20 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 73505 00:36:47.412 killing process with pid 73505 00:36:47.412 08:33:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:47.412 08:33:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:47.412 08:33:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73505' 00:36:47.412 08:33:20 -- common/autotest_common.sh@945 -- # kill 73505 00:36:47.412 08:33:20 -- common/autotest_common.sh@950 -- # wait 73505 00:36:47.670 08:33:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:36:47.670 08:33:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:47.670 08:33:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:47.670 08:33:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:47.670 08:33:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:47.670 08:33:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.670 08:33:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:47.670 08:33:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.670 08:33:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:36:47.670 00:36:47.670 real 0m46.618s 00:36:47.670 user 2m16.793s 00:36:47.670 sys 0m5.129s 00:36:47.670 08:33:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:47.670 08:33:20 -- common/autotest_common.sh@10 -- # set +x 00:36:47.670 ************************************ 00:36:47.670 END TEST nvmf_timeout 00:36:47.670 ************************************ 00:36:47.670 08:33:20 -- nvmf/nvmf.sh@119 -- # [[ virt == phy ]] 00:36:47.670 08:33:20 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:36:47.670 08:33:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:47.670 08:33:20 -- common/autotest_common.sh@10 -- # set +x 00:36:47.670 08:33:20 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:36:47.670 00:36:47.670 real 10m22.520s 00:36:47.670 user 29m10.049s 00:36:47.670 sys 2m57.240s 00:36:47.670 08:33:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:47.670 08:33:20 -- common/autotest_common.sh@10 -- # set +x 00:36:47.670 ************************************ 00:36:47.670 END TEST nvmf_tcp 00:36:47.670 ************************************ 00:36:47.670 08:33:20 -- spdk/autotest.sh@296 -- # [[ 1 -eq 0 ]] 00:36:47.670 08:33:20 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:36:47.670 08:33:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:47.670 08:33:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:47.670 08:33:20 -- common/autotest_common.sh@10 -- # set +x 00:36:47.670 ************************************ 00:36:47.670 START TEST nvmf_dif 00:36:47.670 ************************************ 00:36:47.670 08:33:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:36:47.927 * Looking for test storage... 
00:36:47.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:47.927 08:33:21 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:47.927 08:33:21 -- nvmf/common.sh@7 -- # uname -s 00:36:47.927 08:33:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.927 08:33:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.927 08:33:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.927 08:33:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.927 08:33:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:47.927 08:33:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.927 08:33:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.927 08:33:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.927 08:33:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.927 08:33:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.928 08:33:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:36:47.928 08:33:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:36:47.928 08:33:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.928 08:33:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.928 08:33:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:47.928 08:33:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:47.928 08:33:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.928 08:33:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.928 08:33:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.928 08:33:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.928 08:33:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.928 08:33:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.928 08:33:21 -- paths/export.sh@5 -- # export PATH 00:36:47.928 08:33:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.928 08:33:21 -- nvmf/common.sh@46 -- # : 0 00:36:47.928 08:33:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:36:47.928 08:33:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:36:47.928 08:33:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:36:47.928 08:33:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.928 08:33:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.928 08:33:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:36:47.928 08:33:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:36:47.928 08:33:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:36:47.928 08:33:21 -- target/dif.sh@15 -- # NULL_META=16 00:36:47.928 08:33:21 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:47.928 08:33:21 -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:47.928 08:33:21 -- target/dif.sh@15 -- # NULL_DIF=1 00:36:47.928 08:33:21 -- target/dif.sh@135 -- # nvmftestinit 00:36:47.928 08:33:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:36:47.928 08:33:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:47.928 08:33:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:36:47.928 08:33:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:36:47.928 08:33:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:36:47.928 08:33:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.928 08:33:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:47.928 08:33:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.928 08:33:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:36:47.928 08:33:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:36:47.928 08:33:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:36:47.928 08:33:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:36:47.928 08:33:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:36:47.928 08:33:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:36:47.928 08:33:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:47.928 08:33:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:47.928 08:33:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:47.928 08:33:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:36:47.928 08:33:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:47.928 08:33:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:47.928 08:33:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:47.928 08:33:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:47.928 08:33:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:47.928 08:33:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:47.928 08:33:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:47.928 08:33:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:47.928 08:33:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:36:47.928 08:33:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:36:47.928 Cannot find device "nvmf_tgt_br" 
00:36:47.928 08:33:21 -- nvmf/common.sh@154 -- # true 00:36:47.928 08:33:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:36:47.928 Cannot find device "nvmf_tgt_br2" 00:36:47.928 08:33:21 -- nvmf/common.sh@155 -- # true 00:36:47.928 08:33:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:36:47.928 08:33:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:36:47.928 Cannot find device "nvmf_tgt_br" 00:36:47.928 08:33:21 -- nvmf/common.sh@157 -- # true 00:36:47.928 08:33:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:36:47.928 Cannot find device "nvmf_tgt_br2" 00:36:47.928 08:33:21 -- nvmf/common.sh@158 -- # true 00:36:47.928 08:33:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:36:47.928 08:33:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:36:47.928 08:33:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:47.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:47.928 08:33:21 -- nvmf/common.sh@161 -- # true 00:36:47.928 08:33:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:47.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:47.928 08:33:21 -- nvmf/common.sh@162 -- # true 00:36:47.928 08:33:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:36:47.928 08:33:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:47.928 08:33:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:48.185 08:33:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:48.185 08:33:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:48.185 08:33:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:48.185 08:33:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:48.185 08:33:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:48.185 08:33:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:48.185 08:33:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:36:48.185 08:33:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:36:48.185 08:33:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:36:48.185 08:33:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:36:48.185 08:33:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:48.185 08:33:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:48.185 08:33:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:48.185 08:33:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:36:48.185 08:33:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:36:48.185 08:33:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:36:48.185 08:33:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:48.185 08:33:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:48.185 08:33:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:48.185 08:33:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:48.185 08:33:21 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:36:48.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:48.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:36:48.185 00:36:48.185 --- 10.0.0.2 ping statistics --- 00:36:48.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.185 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:36:48.185 08:33:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:36:48.185 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:48.185 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:36:48.185 00:36:48.186 --- 10.0.0.3 ping statistics --- 00:36:48.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.186 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:36:48.186 08:33:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:48.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:48.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:36:48.186 00:36:48.186 --- 10.0.0.1 ping statistics --- 00:36:48.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.186 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:36:48.186 08:33:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:48.186 08:33:21 -- nvmf/common.sh@421 -- # return 0 00:36:48.186 08:33:21 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:36:48.186 08:33:21 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:48.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:48.443 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:48.443 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:48.443 08:33:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:48.443 08:33:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:36:48.443 08:33:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:36:48.443 08:33:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:48.443 08:33:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:36:48.443 08:33:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:36:48.443 08:33:21 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:48.443 08:33:21 -- target/dif.sh@137 -- # nvmfappstart 00:36:48.443 08:33:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:36:48.443 08:33:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:48.443 08:33:21 -- common/autotest_common.sh@10 -- # set +x 00:36:48.443 08:33:21 -- nvmf/common.sh@469 -- # nvmfpid=74439 00:36:48.443 08:33:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:48.443 08:33:21 -- nvmf/common.sh@470 -- # waitforlisten 74439 00:36:48.443 08:33:21 -- common/autotest_common.sh@819 -- # '[' -z 74439 ']' 00:36:48.443 08:33:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.443 08:33:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:48.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.443 08:33:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
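At this point nvmfappstart has launched the target (pid 74439) inside the namespace and waitforlisten is polling its RPC socket. A rough stand-alone equivalent of that start-and-wait step; the binary path and flags are copied from the trace, while the rpc.py polling loop and timeout are illustrative assumptions rather than the autotest helper's exact logic:

# Start nvmf_tgt in the target namespace and wait for its RPC server to come up.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
# Poll /var/tmp/spdk.sock until the target answers (roughly 10 s budget).
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 2 rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
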
00:36:48.443 08:33:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:48.443 08:33:21 -- common/autotest_common.sh@10 -- # set +x 00:36:48.700 [2024-04-17 08:33:21.795879] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:36:48.700 [2024-04-17 08:33:21.795997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.700 [2024-04-17 08:33:21.940905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.957 [2024-04-17 08:33:22.047019] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:36:48.957 [2024-04-17 08:33:22.047164] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.957 [2024-04-17 08:33:22.047173] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:48.957 [2024-04-17 08:33:22.047179] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:48.957 [2024-04-17 08:33:22.047204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.523 08:33:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:49.523 08:33:22 -- common/autotest_common.sh@852 -- # return 0 00:36:49.523 08:33:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:36:49.523 08:33:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:49.523 08:33:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.523 08:33:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:49.523 08:33:22 -- target/dif.sh@139 -- # create_transport 00:36:49.523 08:33:22 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:49.523 08:33:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:49.523 08:33:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.523 [2024-04-17 08:33:22.735965] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:49.523 08:33:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:49.523 08:33:22 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:49.523 08:33:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:49.523 08:33:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:49.523 08:33:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.523 ************************************ 00:36:49.523 START TEST fio_dif_1_default 00:36:49.523 ************************************ 00:36:49.523 08:33:22 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:36:49.523 08:33:22 -- target/dif.sh@86 -- # create_subsystems 0 00:36:49.523 08:33:22 -- target/dif.sh@28 -- # local sub 00:36:49.523 08:33:22 -- target/dif.sh@30 -- # for sub in "$@" 00:36:49.523 08:33:22 -- target/dif.sh@31 -- # create_subsystem 0 00:36:49.523 08:33:22 -- target/dif.sh@18 -- # local sub_id=0 00:36:49.523 08:33:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:49.523 08:33:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:49.523 08:33:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.523 bdev_null0 00:36:49.523 08:33:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:49.523 08:33:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:49.523 08:33:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:49.523 08:33:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.523 08:33:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:49.523 08:33:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:49.523 08:33:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:49.523 08:33:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.523 08:33:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:49.523 08:33:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:49.523 08:33:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:49.523 08:33:22 -- common/autotest_common.sh@10 -- # set +x 00:36:49.523 [2024-04-17 08:33:22.791990] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:49.523 08:33:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:49.523 08:33:22 -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:49.523 08:33:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.523 08:33:22 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:49.523 08:33:22 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.523 08:33:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:49.523 08:33:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:49.523 08:33:22 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:49.523 08:33:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:49.523 08:33:22 -- target/dif.sh@82 -- # gen_fio_conf 00:36:49.523 08:33:22 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:49.523 08:33:22 -- common/autotest_common.sh@1320 -- # shift 00:36:49.523 08:33:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:49.523 08:33:22 -- target/dif.sh@54 -- # local file 00:36:49.523 08:33:22 -- nvmf/common.sh@520 -- # config=() 00:36:49.523 08:33:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.523 08:33:22 -- target/dif.sh@56 -- # cat 00:36:49.523 08:33:22 -- nvmf/common.sh@520 -- # local subsystem config 00:36:49.523 08:33:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:49.523 08:33:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:49.523 { 00:36:49.523 "params": { 00:36:49.523 "name": "Nvme$subsystem", 00:36:49.524 "trtype": "$TEST_TRANSPORT", 00:36:49.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:49.524 "adrfam": "ipv4", 00:36:49.524 "trsvcid": "$NVMF_PORT", 00:36:49.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:49.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:49.524 "hdgst": ${hdgst:-false}, 00:36:49.524 "ddgst": ${ddgst:-false} 00:36:49.524 }, 00:36:49.524 "method": "bdev_nvme_attach_controller" 00:36:49.524 } 00:36:49.524 EOF 00:36:49.524 )") 00:36:49.524 08:33:22 -- nvmf/common.sh@542 -- # cat 00:36:49.524 08:33:22 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:49.524 08:33:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:49.524 08:33:22 -- target/dif.sh@72 -- # (( file <= files )) 00:36:49.524 
08:33:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:49.524 08:33:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:49.524 08:33:22 -- nvmf/common.sh@544 -- # jq . 00:36:49.524 08:33:22 -- nvmf/common.sh@545 -- # IFS=, 00:36:49.524 08:33:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:49.524 "params": { 00:36:49.524 "name": "Nvme0", 00:36:49.524 "trtype": "tcp", 00:36:49.524 "traddr": "10.0.0.2", 00:36:49.524 "adrfam": "ipv4", 00:36:49.524 "trsvcid": "4420", 00:36:49.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:49.524 "hdgst": false, 00:36:49.524 "ddgst": false 00:36:49.524 }, 00:36:49.524 "method": "bdev_nvme_attach_controller" 00:36:49.524 }' 00:36:49.524 08:33:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:36:49.524 08:33:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:36:49.524 08:33:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.524 08:33:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:49.524 08:33:22 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:49.524 08:33:22 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:36:49.782 08:33:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:36:49.782 08:33:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:36:49.782 08:33:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:49.782 08:33:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.782 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:49.782 fio-3.35 00:36:49.782 Starting 1 thread 00:36:50.347 [2024-04-17 08:33:23.388721] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
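The JSON fragment printed above is what fio's spdk_bdev ioengine consumes (fed through /dev/fd/62) to attach the NVMe-oF controller before the job starts. A hand-runnable sketch of the same invocation; the surrounding "subsystems"/"bdev" envelope, the Nvme0n1 filename and the runtime are assumptions here, since the trace only shows the per-controller fragment and the fio banner (randread, 4 KiB blocks, iodepth 4):

# Write the bdev config to a file instead of /dev/fd/62, then run fio with the SPDK plugin.
SPDK=/home/vagrant/spdk_repo/spdk
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
    --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json --thread=1 \
    --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 --runtime=10 --time_based=1
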
00:36:50.347 [2024-04-17 08:33:23.388782] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:00.347 00:37:00.347 filename0: (groupid=0, jobs=1): err= 0: pid=74510: Wed Apr 17 08:33:33 2024 00:37:00.347 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(391MiB/10001msec) 00:37:00.347 slat (nsec): min=4195, max=35823, avg=7630.51, stdev=1155.16 00:37:00.347 clat (usec): min=289, max=5407, avg=378.67, stdev=39.93 00:37:00.347 lat (usec): min=295, max=5433, avg=386.30, stdev=40.12 00:37:00.347 clat percentiles (usec): 00:37:00.347 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 363], 00:37:00.347 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 383], 00:37:00.347 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[ 400], 95.00th=[ 408], 00:37:00.347 | 99.00th=[ 465], 99.50th=[ 478], 99.90th=[ 510], 99.95th=[ 523], 00:37:00.347 | 99.99th=[ 1401] 00:37:00.347 bw ( KiB/s): min=38560, max=42068, per=99.77%, avg=39943.79, stdev=852.86, samples=19 00:37:00.347 iops : min= 9640, max=10517, avg=9985.95, stdev=213.21, samples=19 00:37:00.347 lat (usec) : 500=99.84%, 750=0.14% 00:37:00.347 lat (msec) : 2=0.01%, 10=0.01% 00:37:00.347 cpu : usr=87.56%, sys=10.92%, ctx=31, majf=0, minf=0 00:37:00.347 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.347 issued rwts: total=100096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.347 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:00.347 00:37:00.347 Run status group 0 (all jobs): 00:37:00.347 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=391MiB (410MB), run=10001-10001msec 00:37:00.605 08:33:33 -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:00.605 08:33:33 -- target/dif.sh@43 -- # local sub 00:37:00.605 08:33:33 -- target/dif.sh@45 -- # for sub in "$@" 00:37:00.605 08:33:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:00.605 08:33:33 -- target/dif.sh@36 -- # local sub_id=0 00:37:00.605 08:33:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:00.605 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.605 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.605 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.605 08:33:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:00.605 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.605 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.605 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.605 00:37:00.605 real 0m10.993s 00:37:00.605 user 0m9.373s 00:37:00.605 sys 0m1.360s 00:37:00.605 08:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:00.605 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.605 ************************************ 00:37:00.605 END TEST fio_dif_1_default 00:37:00.605 ************************************ 00:37:00.606 08:33:33 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:00.606 08:33:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:00.606 08:33:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:00.606 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.606 ************************************ 00:37:00.606 START TEST 
fio_dif_1_multi_subsystems 00:37:00.606 ************************************ 00:37:00.606 08:33:33 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:37:00.606 08:33:33 -- target/dif.sh@92 -- # local files=1 00:37:00.606 08:33:33 -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:00.606 08:33:33 -- target/dif.sh@28 -- # local sub 00:37:00.606 08:33:33 -- target/dif.sh@30 -- # for sub in "$@" 00:37:00.606 08:33:33 -- target/dif.sh@31 -- # create_subsystem 0 00:37:00.606 08:33:33 -- target/dif.sh@18 -- # local sub_id=0 00:37:00.606 08:33:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:00.606 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.606 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.606 bdev_null0 00:37:00.606 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.606 08:33:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:00.606 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.606 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.606 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.606 08:33:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:00.606 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.606 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.606 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.606 08:33:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:00.606 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.606 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.606 [2024-04-17 08:33:33.825664] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:00.606 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.606 08:33:33 -- target/dif.sh@30 -- # for sub in "$@" 00:37:00.606 08:33:33 -- target/dif.sh@31 -- # create_subsystem 1 00:37:00.606 08:33:33 -- target/dif.sh@18 -- # local sub_id=1 00:37:00.606 08:33:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:00.606 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.606 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.606 bdev_null1 00:37:00.606 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.606 08:33:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:00.606 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.606 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.606 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.606 08:33:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:00.606 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.606 08:33:33 -- common/autotest_common.sh@10 -- # set +x 00:37:00.606 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.606 08:33:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:00.606 08:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.606 08:33:33 -- 
common/autotest_common.sh@10 -- # set +x 00:37:00.606 08:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.606 08:33:33 -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:00.606 08:33:33 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:00.606 08:33:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:00.606 08:33:33 -- nvmf/common.sh@520 -- # config=() 00:37:00.606 08:33:33 -- target/dif.sh@82 -- # gen_fio_conf 00:37:00.606 08:33:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.606 08:33:33 -- nvmf/common.sh@520 -- # local subsystem config 00:37:00.606 08:33:33 -- target/dif.sh@54 -- # local file 00:37:00.606 08:33:33 -- target/dif.sh@56 -- # cat 00:37:00.606 08:33:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:00.606 08:33:33 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.606 08:33:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:00.606 { 00:37:00.606 "params": { 00:37:00.606 "name": "Nvme$subsystem", 00:37:00.606 "trtype": "$TEST_TRANSPORT", 00:37:00.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:00.606 "adrfam": "ipv4", 00:37:00.606 "trsvcid": "$NVMF_PORT", 00:37:00.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:00.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:00.606 "hdgst": ${hdgst:-false}, 00:37:00.606 "ddgst": ${ddgst:-false} 00:37:00.606 }, 00:37:00.606 "method": "bdev_nvme_attach_controller" 00:37:00.606 } 00:37:00.606 EOF 00:37:00.606 )") 00:37:00.606 08:33:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:37:00.606 08:33:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:00.606 08:33:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:37:00.606 08:33:33 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:00.606 08:33:33 -- common/autotest_common.sh@1320 -- # shift 00:37:00.606 08:33:33 -- nvmf/common.sh@542 -- # cat 00:37:00.606 08:33:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:37:00.606 08:33:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:00.606 08:33:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:37:00.606 08:33:33 -- target/dif.sh@72 -- # (( file <= files )) 00:37:00.606 08:33:33 -- target/dif.sh@73 -- # cat 00:37:00.606 08:33:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:00.606 08:33:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:00.606 08:33:33 -- target/dif.sh@72 -- # (( file++ )) 00:37:00.606 08:33:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:00.606 { 00:37:00.606 "params": { 00:37:00.606 "name": "Nvme$subsystem", 00:37:00.606 "trtype": "$TEST_TRANSPORT", 00:37:00.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:00.606 "adrfam": "ipv4", 00:37:00.606 "trsvcid": "$NVMF_PORT", 00:37:00.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:00.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:00.606 "hdgst": ${hdgst:-false}, 00:37:00.606 "ddgst": ${ddgst:-false} 00:37:00.606 }, 00:37:00.606 "method": "bdev_nvme_attach_controller" 00:37:00.606 } 00:37:00.606 EOF 00:37:00.606 )") 00:37:00.606 08:33:33 -- target/dif.sh@72 -- # (( file <= files )) 00:37:00.606 08:33:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:00.606 08:33:33 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:37:00.606 08:33:33 -- nvmf/common.sh@542 -- # cat 00:37:00.606 08:33:33 -- nvmf/common.sh@544 -- # jq . 00:37:00.606 08:33:33 -- nvmf/common.sh@545 -- # IFS=, 00:37:00.606 08:33:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:37:00.606 "params": { 00:37:00.606 "name": "Nvme0", 00:37:00.606 "trtype": "tcp", 00:37:00.606 "traddr": "10.0.0.2", 00:37:00.606 "adrfam": "ipv4", 00:37:00.606 "trsvcid": "4420", 00:37:00.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:00.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:00.606 "hdgst": false, 00:37:00.606 "ddgst": false 00:37:00.606 }, 00:37:00.606 "method": "bdev_nvme_attach_controller" 00:37:00.606 },{ 00:37:00.606 "params": { 00:37:00.606 "name": "Nvme1", 00:37:00.606 "trtype": "tcp", 00:37:00.606 "traddr": "10.0.0.2", 00:37:00.606 "adrfam": "ipv4", 00:37:00.606 "trsvcid": "4420", 00:37:00.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:00.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:00.606 "hdgst": false, 00:37:00.606 "ddgst": false 00:37:00.606 }, 00:37:00.606 "method": "bdev_nvme_attach_controller" 00:37:00.606 }' 00:37:00.606 08:33:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:00.606 08:33:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:00.606 08:33:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:00.606 08:33:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:00.606 08:33:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:37:00.606 08:33:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:00.606 08:33:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:00.606 08:33:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:00.606 08:33:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:00.606 08:33:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:00.864 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:00.864 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:00.864 fio-3.35 00:37:00.864 Starting 2 threads 00:37:01.430 [2024-04-17 08:33:34.489067] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
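For the multi-subsystem case the target side was provisioned earlier in the trace with rpc_cmd: two DIF type-1 null bdevs, one subsystem and one TCP listener each, on top of a transport created with --dif-insert-or-strip. The same provisioning expressed directly against rpc.py, with arguments copied from the trace (the loop is only for compactness; in the test the transport is created once at startup and the subsystems per test case):

# Provision the target over its RPC socket, mirroring create_transport/create_subsystem.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
for i in 0 1; do
    rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
# Teardown at the end of each test mirrors this: nvmf_delete_subsystem, then bdev_null_delete.
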
00:37:01.430 [2024-04-17 08:33:34.489122] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:11.465 00:37:11.465 filename0: (groupid=0, jobs=1): err= 0: pid=74670: Wed Apr 17 08:33:44 2024 00:37:11.465 read: IOPS=5585, BW=21.8MiB/s (22.9MB/s)(218MiB/10001msec) 00:37:11.465 slat (nsec): min=5861, max=73739, avg=13381.91, stdev=3187.34 00:37:11.465 clat (usec): min=498, max=5141, avg=680.78, stdev=60.37 00:37:11.465 lat (usec): min=504, max=5171, avg=694.16, stdev=61.10 00:37:11.465 clat percentiles (usec): 00:37:11.465 | 1.00th=[ 562], 5.00th=[ 594], 10.00th=[ 619], 20.00th=[ 644], 00:37:11.465 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 685], 60.00th=[ 693], 00:37:11.465 | 70.00th=[ 709], 80.00th=[ 717], 90.00th=[ 734], 95.00th=[ 750], 00:37:11.465 | 99.00th=[ 791], 99.50th=[ 832], 99.90th=[ 914], 99.95th=[ 947], 00:37:11.465 | 99.99th=[ 1012] 00:37:11.465 bw ( KiB/s): min=21536, max=23872, per=50.02%, avg=22362.32, stdev=642.86, samples=19 00:37:11.465 iops : min= 5384, max= 5968, avg=5590.58, stdev=160.72, samples=19 00:37:11.465 lat (usec) : 500=0.01%, 750=96.20%, 1000=3.79% 00:37:11.465 lat (msec) : 2=0.01%, 10=0.01% 00:37:11.465 cpu : usr=93.52%, sys=5.44%, ctx=12, majf=0, minf=6 00:37:11.465 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:11.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.465 issued rwts: total=55860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.465 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:11.465 filename1: (groupid=0, jobs=1): err= 0: pid=74671: Wed Apr 17 08:33:44 2024 00:37:11.465 read: IOPS=5591, BW=21.8MiB/s (22.9MB/s)(218MiB/10001msec) 00:37:11.465 slat (nsec): min=5807, max=64659, avg=12942.31, stdev=3021.41 00:37:11.465 clat (usec): min=305, max=3267, avg=681.22, stdev=50.24 00:37:11.465 lat (usec): min=312, max=3303, avg=694.17, stdev=50.80 00:37:11.465 clat percentiles (usec): 00:37:11.465 | 1.00th=[ 562], 5.00th=[ 594], 10.00th=[ 627], 20.00th=[ 652], 00:37:11.465 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 685], 60.00th=[ 693], 00:37:11.465 | 70.00th=[ 701], 80.00th=[ 717], 90.00th=[ 734], 95.00th=[ 742], 00:37:11.465 | 99.00th=[ 783], 99.50th=[ 824], 99.90th=[ 889], 99.95th=[ 914], 00:37:11.465 | 99.99th=[ 1004] 00:37:11.465 bw ( KiB/s): min=21536, max=23840, per=50.07%, avg=22384.21, stdev=624.85, samples=19 00:37:11.465 iops : min= 5384, max= 5960, avg=5596.05, stdev=156.21, samples=19 00:37:11.465 lat (usec) : 500=0.11%, 750=96.72%, 1000=3.16% 00:37:11.465 lat (msec) : 2=0.01%, 4=0.01% 00:37:11.465 cpu : usr=93.14%, sys=5.89%, ctx=12, majf=0, minf=9 00:37:11.465 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:11.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.465 issued rwts: total=55920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.465 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:11.465 00:37:11.465 Run status group 0 (all jobs): 00:37:11.465 READ: bw=43.7MiB/s (45.8MB/s), 21.8MiB/s-21.8MiB/s (22.9MB/s-22.9MB/s), io=437MiB (458MB), run=10001-10001msec 00:37:11.724 08:33:44 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:11.724 08:33:44 -- target/dif.sh@43 -- # local sub 00:37:11.724 08:33:44 -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.724 08:33:44 
-- target/dif.sh@46 -- # destroy_subsystem 0 00:37:11.724 08:33:44 -- target/dif.sh@36 -- # local sub_id=0 00:37:11.724 08:33:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:11.724 08:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:11.724 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.724 08:33:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:11.724 08:33:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:11.724 08:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:11.724 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.724 08:33:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:11.724 08:33:44 -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.724 08:33:44 -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:11.724 08:33:44 -- target/dif.sh@36 -- # local sub_id=1 00:37:11.724 08:33:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:11.724 08:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:11.724 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.724 08:33:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:11.724 08:33:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:11.724 08:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:11.724 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.724 08:33:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:11.724 00:37:11.724 real 0m11.054s 00:37:11.724 user 0m19.345s 00:37:11.724 sys 0m1.386s 00:37:11.725 08:33:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:11.725 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.725 ************************************ 00:37:11.725 END TEST fio_dif_1_multi_subsystems 00:37:11.725 ************************************ 00:37:11.725 08:33:44 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:11.725 08:33:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:11.725 08:33:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:11.725 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.725 ************************************ 00:37:11.725 START TEST fio_dif_rand_params 00:37:11.725 ************************************ 00:37:11.725 08:33:44 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:37:11.725 08:33:44 -- target/dif.sh@100 -- # local NULL_DIF 00:37:11.725 08:33:44 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:11.725 08:33:44 -- target/dif.sh@103 -- # NULL_DIF=3 00:37:11.725 08:33:44 -- target/dif.sh@103 -- # bs=128k 00:37:11.725 08:33:44 -- target/dif.sh@103 -- # numjobs=3 00:37:11.725 08:33:44 -- target/dif.sh@103 -- # iodepth=3 00:37:11.725 08:33:44 -- target/dif.sh@103 -- # runtime=5 00:37:11.725 08:33:44 -- target/dif.sh@105 -- # create_subsystems 0 00:37:11.725 08:33:44 -- target/dif.sh@28 -- # local sub 00:37:11.725 08:33:44 -- target/dif.sh@30 -- # for sub in "$@" 00:37:11.725 08:33:44 -- target/dif.sh@31 -- # create_subsystem 0 00:37:11.725 08:33:44 -- target/dif.sh@18 -- # local sub_id=0 00:37:11.725 08:33:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:11.725 08:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:11.725 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.725 bdev_null0 00:37:11.725 08:33:44 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:37:11.725 08:33:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:11.725 08:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:11.725 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.725 08:33:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:11.725 08:33:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:11.725 08:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:11.725 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.725 08:33:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:11.725 08:33:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:11.725 08:33:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:11.725 08:33:44 -- common/autotest_common.sh@10 -- # set +x 00:37:11.725 [2024-04-17 08:33:44.955513] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.725 08:33:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:11.725 08:33:44 -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:11.725 08:33:44 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:11.725 08:33:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:11.725 08:33:44 -- nvmf/common.sh@520 -- # config=() 00:37:11.725 08:33:44 -- nvmf/common.sh@520 -- # local subsystem config 00:37:11.725 08:33:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.725 08:33:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:11.725 08:33:44 -- target/dif.sh@82 -- # gen_fio_conf 00:37:11.725 08:33:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:11.725 { 00:37:11.725 "params": { 00:37:11.725 "name": "Nvme$subsystem", 00:37:11.725 "trtype": "$TEST_TRANSPORT", 00:37:11.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.725 "adrfam": "ipv4", 00:37:11.725 "trsvcid": "$NVMF_PORT", 00:37:11.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.725 "hdgst": ${hdgst:-false}, 00:37:11.725 "ddgst": ${ddgst:-false} 00:37:11.725 }, 00:37:11.725 "method": "bdev_nvme_attach_controller" 00:37:11.725 } 00:37:11.725 EOF 00:37:11.725 )") 00:37:11.725 08:33:44 -- target/dif.sh@54 -- # local file 00:37:11.725 08:33:44 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.725 08:33:44 -- target/dif.sh@56 -- # cat 00:37:11.725 08:33:44 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:37:11.725 08:33:44 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:11.725 08:33:44 -- common/autotest_common.sh@1318 -- # local sanitizers 00:37:11.725 08:33:44 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:11.725 08:33:44 -- nvmf/common.sh@542 -- # cat 00:37:11.725 08:33:44 -- common/autotest_common.sh@1320 -- # shift 00:37:11.725 08:33:44 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:37:11.725 08:33:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.725 08:33:44 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:11.725 08:33:44 -- target/dif.sh@72 -- # (( file = 1 
)) 00:37:11.725 08:33:44 -- target/dif.sh@72 -- # (( file <= files )) 00:37:11.725 08:33:44 -- common/autotest_common.sh@1324 -- # grep libasan 00:37:11.725 08:33:44 -- nvmf/common.sh@544 -- # jq . 00:37:11.725 08:33:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:11.725 08:33:44 -- nvmf/common.sh@545 -- # IFS=, 00:37:11.725 08:33:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:37:11.725 "params": { 00:37:11.725 "name": "Nvme0", 00:37:11.725 "trtype": "tcp", 00:37:11.725 "traddr": "10.0.0.2", 00:37:11.725 "adrfam": "ipv4", 00:37:11.725 "trsvcid": "4420", 00:37:11.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.725 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.725 "hdgst": false, 00:37:11.725 "ddgst": false 00:37:11.725 }, 00:37:11.725 "method": "bdev_nvme_attach_controller" 00:37:11.725 }' 00:37:11.725 08:33:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:11.725 08:33:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:11.725 08:33:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.725 08:33:45 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:11.725 08:33:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:11.725 08:33:45 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:37:11.725 08:33:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:11.725 08:33:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:11.725 08:33:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:11.725 08:33:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.984 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:11.984 ... 00:37:11.984 fio-3.35 00:37:11.984 Starting 3 threads 00:37:12.243 [2024-04-17 08:33:45.547797] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
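fio_dif_rand_params reruns the DIF workload with varying job shapes; this pass uses NULL_DIF=3, bs=128k, numjobs=3, iodepth=3 and runtime=5, as set in the trace above. A sketch of an equivalent stand-alone job file; the layout and the Nvme0n1 filename are assumptions (gen_fio_conf writes its output to /dev/fd/61 and it is never echoed), and the JSON config is the controller file from the earlier sketch:

# Approximate the randomized-parameter job: 3 threads, 128 KiB random reads, queue depth 3, 5 s.
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/nvme0.json
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio /tmp/dif_rand.fio
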
00:37:12.243 [2024-04-17 08:33:45.547837] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:17.517 00:37:17.517 filename0: (groupid=0, jobs=1): err= 0: pid=74827: Wed Apr 17 08:33:50 2024 00:37:17.517 read: IOPS=286, BW=35.8MiB/s (37.5MB/s)(179MiB/5006msec) 00:37:17.517 slat (usec): min=6, max=123, avg=27.87, stdev=17.70 00:37:17.517 clat (usec): min=8068, max=28216, avg=10404.70, stdev=1044.89 00:37:17.517 lat (usec): min=8076, max=28253, avg=10432.56, stdev=1047.05 00:37:17.517 clat percentiles (usec): 00:37:17.517 | 1.00th=[ 8979], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:37:17.517 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:37:17.517 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11207], 00:37:17.517 | 99.00th=[12518], 99.50th=[12780], 99.90th=[28181], 99.95th=[28181], 00:37:17.517 | 99.99th=[28181] 00:37:17.517 bw ( KiB/s): min=35328, max=37632, per=33.35%, avg=36633.60, stdev=890.50, samples=10 00:37:17.517 iops : min= 276, max= 294, avg=286.20, stdev= 6.96, samples=10 00:37:17.517 lat (msec) : 10=23.22%, 20=76.57%, 50=0.21% 00:37:17.517 cpu : usr=97.16%, sys=2.34%, ctx=12, majf=0, minf=0 00:37:17.517 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.517 issued rwts: total=1434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.517 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:17.517 filename0: (groupid=0, jobs=1): err= 0: pid=74828: Wed Apr 17 08:33:50 2024 00:37:17.517 read: IOPS=286, BW=35.8MiB/s (37.5MB/s)(179MiB/5003msec) 00:37:17.517 slat (nsec): min=6173, max=81004, avg=29018.70, stdev=17970.35 00:37:17.517 clat (usec): min=8851, max=28159, avg=10418.99, stdev=1037.87 00:37:17.517 lat (usec): min=8865, max=28213, avg=10448.01, stdev=1040.18 00:37:17.517 clat percentiles (usec): 00:37:17.517 | 1.00th=[ 8979], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:37:17.517 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:37:17.517 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11338], 00:37:17.517 | 99.00th=[12649], 99.50th=[12780], 99.90th=[28181], 99.95th=[28181], 00:37:17.517 | 99.99th=[28181] 00:37:17.517 bw ( KiB/s): min=35328, max=37707, per=33.29%, avg=36564.30, stdev=754.31, samples=10 00:37:17.517 iops : min= 276, max= 294, avg=285.60, stdev= 5.80, samples=10 00:37:17.517 lat (msec) : 10=22.43%, 20=77.36%, 50=0.21% 00:37:17.517 cpu : usr=97.02%, sys=2.48%, ctx=4, majf=0, minf=0 00:37:17.517 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.517 issued rwts: total=1431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.517 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:17.517 filename0: (groupid=0, jobs=1): err= 0: pid=74829: Wed Apr 17 08:33:50 2024 00:37:17.517 read: IOPS=285, BW=35.7MiB/s (37.5MB/s)(179MiB/5004msec) 00:37:17.517 slat (nsec): min=6395, max=77842, avg=20260.48, stdev=10572.76 00:37:17.517 clat (usec): min=8917, max=28201, avg=10440.45, stdev=1036.65 00:37:17.517 lat (usec): min=8928, max=28232, avg=10460.71, stdev=1037.97 00:37:17.517 clat percentiles (usec): 00:37:17.517 | 1.00th=[ 8979], 
5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:37:17.517 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:37:17.517 | 70.00th=[10814], 80.00th=[10814], 90.00th=[10945], 95.00th=[11338], 00:37:17.517 | 99.00th=[12649], 99.50th=[12780], 99.90th=[28181], 99.95th=[28181], 00:37:17.517 | 99.99th=[28181] 00:37:17.517 bw ( KiB/s): min=35328, max=37632, per=33.28%, avg=36556.80, stdev=741.96, samples=10 00:37:17.517 iops : min= 276, max= 294, avg=285.60, stdev= 5.80, samples=10 00:37:17.517 lat (msec) : 10=21.73%, 20=78.06%, 50=0.21% 00:37:17.517 cpu : usr=93.84%, sys=5.58%, ctx=51, majf=0, minf=0 00:37:17.517 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.517 issued rwts: total=1431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.517 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:17.517 00:37:17.517 Run status group 0 (all jobs): 00:37:17.517 READ: bw=107MiB/s (112MB/s), 35.7MiB/s-35.8MiB/s (37.5MB/s-37.5MB/s), io=537MiB (563MB), run=5003-5006msec 00:37:17.778 08:33:50 -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:17.778 08:33:50 -- target/dif.sh@43 -- # local sub 00:37:17.778 08:33:50 -- target/dif.sh@45 -- # for sub in "$@" 00:37:17.778 08:33:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:17.778 08:33:50 -- target/dif.sh@36 -- # local sub_id=0 00:37:17.778 08:33:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- target/dif.sh@109 -- # NULL_DIF=2 00:37:17.778 08:33:50 -- target/dif.sh@109 -- # bs=4k 00:37:17.778 08:33:50 -- target/dif.sh@109 -- # numjobs=8 00:37:17.778 08:33:50 -- target/dif.sh@109 -- # iodepth=16 00:37:17.778 08:33:50 -- target/dif.sh@109 -- # runtime= 00:37:17.778 08:33:50 -- target/dif.sh@109 -- # files=2 00:37:17.778 08:33:50 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:17.778 08:33:50 -- target/dif.sh@28 -- # local sub 00:37:17.778 08:33:50 -- target/dif.sh@30 -- # for sub in "$@" 00:37:17.778 08:33:50 -- target/dif.sh@31 -- # create_subsystem 0 00:37:17.778 08:33:50 -- target/dif.sh@18 -- # local sub_id=0 00:37:17.778 08:33:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 bdev_null0 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 [2024-04-17 08:33:50.945026] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- target/dif.sh@30 -- # for sub in "$@" 00:37:17.778 08:33:50 -- target/dif.sh@31 -- # create_subsystem 1 00:37:17.778 08:33:50 -- target/dif.sh@18 -- # local sub_id=1 00:37:17.778 08:33:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 bdev_null1 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 08:33:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:50 -- target/dif.sh@30 -- # for sub in "$@" 00:37:17.778 08:33:50 -- target/dif.sh@31 -- # create_subsystem 2 00:37:17.778 08:33:50 -- target/dif.sh@18 -- # local sub_id=2 00:37:17.778 08:33:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:17.778 08:33:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 bdev_null2 00:37:17.778 08:33:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:17.778 08:33:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:51 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 08:33:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:17.778 08:33:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:51 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 
08:33:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:17.778 08:33:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.778 08:33:51 -- common/autotest_common.sh@10 -- # set +x 00:37:17.778 08:33:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:17.778 08:33:51 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:17.778 08:33:51 -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:17.778 08:33:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:17.778 08:33:51 -- nvmf/common.sh@520 -- # config=() 00:37:17.778 08:33:51 -- nvmf/common.sh@520 -- # local subsystem config 00:37:17.778 08:33:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:17.778 08:33:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:17.778 { 00:37:17.778 "params": { 00:37:17.778 "name": "Nvme$subsystem", 00:37:17.778 "trtype": "$TEST_TRANSPORT", 00:37:17.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:17.778 "adrfam": "ipv4", 00:37:17.778 "trsvcid": "$NVMF_PORT", 00:37:17.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:17.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:17.778 "hdgst": ${hdgst:-false}, 00:37:17.778 "ddgst": ${ddgst:-false} 00:37:17.778 }, 00:37:17.778 "method": "bdev_nvme_attach_controller" 00:37:17.778 } 00:37:17.778 EOF 00:37:17.778 )") 00:37:17.779 08:33:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.779 08:33:51 -- nvmf/common.sh@542 -- # cat 00:37:17.779 08:33:51 -- target/dif.sh@82 -- # gen_fio_conf 00:37:17.779 08:33:51 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.779 08:33:51 -- target/dif.sh@54 -- # local file 00:37:17.779 08:33:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:37:17.779 08:33:51 -- target/dif.sh@56 -- # cat 00:37:17.779 08:33:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:17.779 08:33:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:37:17.779 08:33:51 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:17.779 08:33:51 -- common/autotest_common.sh@1320 -- # shift 00:37:17.779 08:33:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:37:17.779 08:33:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:17.779 08:33:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:17.779 08:33:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:17.779 { 00:37:17.779 "params": { 00:37:17.779 "name": "Nvme$subsystem", 00:37:17.779 "trtype": "$TEST_TRANSPORT", 00:37:17.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:17.779 "adrfam": "ipv4", 00:37:17.779 "trsvcid": "$NVMF_PORT", 00:37:17.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:17.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:17.779 "hdgst": ${hdgst:-false}, 00:37:17.779 "ddgst": ${ddgst:-false} 00:37:17.779 }, 00:37:17.779 "method": "bdev_nvme_attach_controller" 00:37:17.779 } 00:37:17.779 EOF 00:37:17.779 )") 00:37:17.779 08:33:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:37:17.779 08:33:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:17.779 08:33:51 -- common/autotest_common.sh@1324 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:17.779 08:33:51 -- target/dif.sh@72 -- # (( file <= files )) 00:37:17.779 08:33:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:37:17.779 08:33:51 -- target/dif.sh@73 -- # cat 00:37:17.779 08:33:51 -- nvmf/common.sh@542 -- # cat 00:37:17.779 08:33:51 -- target/dif.sh@72 -- # (( file++ )) 00:37:17.779 08:33:51 -- target/dif.sh@72 -- # (( file <= files )) 00:37:17.779 08:33:51 -- target/dif.sh@73 -- # cat 00:37:17.779 08:33:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:17.779 08:33:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:17.779 { 00:37:17.779 "params": { 00:37:17.779 "name": "Nvme$subsystem", 00:37:17.779 "trtype": "$TEST_TRANSPORT", 00:37:17.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:17.779 "adrfam": "ipv4", 00:37:17.779 "trsvcid": "$NVMF_PORT", 00:37:17.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:17.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:17.779 "hdgst": ${hdgst:-false}, 00:37:17.779 "ddgst": ${ddgst:-false} 00:37:17.779 }, 00:37:17.779 "method": "bdev_nvme_attach_controller" 00:37:17.779 } 00:37:17.779 EOF 00:37:17.779 )") 00:37:17.779 08:33:51 -- nvmf/common.sh@542 -- # cat 00:37:17.779 08:33:51 -- target/dif.sh@72 -- # (( file++ )) 00:37:17.779 08:33:51 -- target/dif.sh@72 -- # (( file <= files )) 00:37:17.779 08:33:51 -- nvmf/common.sh@544 -- # jq . 00:37:17.779 08:33:51 -- nvmf/common.sh@545 -- # IFS=, 00:37:17.779 08:33:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:37:17.779 "params": { 00:37:17.779 "name": "Nvme0", 00:37:17.779 "trtype": "tcp", 00:37:17.779 "traddr": "10.0.0.2", 00:37:17.779 "adrfam": "ipv4", 00:37:17.779 "trsvcid": "4420", 00:37:17.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.779 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:17.779 "hdgst": false, 00:37:17.779 "ddgst": false 00:37:17.779 }, 00:37:17.779 "method": "bdev_nvme_attach_controller" 00:37:17.779 },{ 00:37:17.779 "params": { 00:37:17.779 "name": "Nvme1", 00:37:17.779 "trtype": "tcp", 00:37:17.779 "traddr": "10.0.0.2", 00:37:17.779 "adrfam": "ipv4", 00:37:17.779 "trsvcid": "4420", 00:37:17.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:17.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:17.779 "hdgst": false, 00:37:17.779 "ddgst": false 00:37:17.779 }, 00:37:17.779 "method": "bdev_nvme_attach_controller" 00:37:17.779 },{ 00:37:17.779 "params": { 00:37:17.779 "name": "Nvme2", 00:37:17.779 "trtype": "tcp", 00:37:17.779 "traddr": "10.0.0.2", 00:37:17.779 "adrfam": "ipv4", 00:37:17.779 "trsvcid": "4420", 00:37:17.779 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:17.779 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:17.779 "hdgst": false, 00:37:17.779 "ddgst": false 00:37:17.779 }, 00:37:17.779 "method": "bdev_nvme_attach_controller" 00:37:17.779 }' 00:37:17.779 08:33:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:17.779 08:33:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:17.779 08:33:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:17.779 08:33:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:17.779 08:33:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:37:17.779 08:33:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:18.039 08:33:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:18.039 08:33:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:18.039 08:33:51 -- common/autotest_common.sh@1331 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:18.039 08:33:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:18.039 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:18.039 ... 00:37:18.039 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:18.039 ... 00:37:18.039 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:18.039 ... 00:37:18.039 fio-3.35 00:37:18.039 Starting 24 threads 00:37:18.608 [2024-04-17 08:33:51.843936] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:37:18.608 [2024-04-17 08:33:51.844017] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:30.841 00:37:30.841 filename0: (groupid=0, jobs=1): err= 0: pid=74929: Wed Apr 17 08:34:02 2024 00:37:30.841 read: IOPS=184, BW=738KiB/s (755kB/s)(7396KiB/10028msec) 00:37:30.841 slat (usec): min=4, max=8039, avg=21.46, stdev=186.84 00:37:30.841 clat (msec): min=16, max=144, avg=86.61, stdev=20.89 00:37:30.841 lat (msec): min=16, max=144, avg=86.63, stdev=20.88 00:37:30.841 clat percentiles (msec): 00:37:30.841 | 1.00th=[ 31], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 68], 00:37:30.841 | 30.00th=[ 77], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 94], 00:37:30.841 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 122], 00:37:30.841 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:37:30.841 | 99.99th=[ 144] 00:37:30.841 bw ( KiB/s): min= 528, max= 1024, per=3.75%, avg=733.20, stdev=106.81, samples=20 00:37:30.841 iops : min= 132, max= 256, avg=183.30, stdev=26.70, samples=20 00:37:30.841 lat (msec) : 20=0.76%, 50=1.35%, 100=73.55%, 250=24.34% 00:37:30.841 cpu : usr=38.94%, sys=0.81%, ctx=1097, majf=0, minf=9 00:37:30.841 IO depths : 1=0.1%, 2=2.5%, 4=10.7%, 8=71.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:37:30.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 complete : 0=0.0%, 4=90.6%, 8=7.0%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 issued rwts: total=1849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.841 filename0: (groupid=0, jobs=1): err= 0: pid=74930: Wed Apr 17 08:34:02 2024 00:37:30.841 read: IOPS=217, BW=869KiB/s (890kB/s)(8688KiB/10001msec) 00:37:30.841 slat (usec): min=2, max=5032, avg=36.85, stdev=246.03 00:37:30.841 clat (msec): min=7, max=114, avg=73.49, stdev=20.18 00:37:30.841 lat (msec): min=7, max=114, avg=73.53, stdev=20.18 00:37:30.841 clat percentiles (msec): 00:37:30.841 | 1.00th=[ 31], 5.00th=[ 41], 10.00th=[ 49], 20.00th=[ 58], 00:37:30.841 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 79], 00:37:30.841 | 70.00th=[ 88], 80.00th=[ 94], 90.00th=[ 101], 95.00th=[ 106], 00:37:30.841 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 115], 00:37:30.841 | 99.99th=[ 115] 00:37:30.841 bw ( KiB/s): min= 768, max= 1104, per=4.42%, avg=863.16, stdev=102.34, samples=19 00:37:30.841 iops : min= 192, max= 276, avg=215.79, stdev=25.59, samples=19 00:37:30.841 lat (msec) : 10=0.18%, 20=0.28%, 50=10.41%, 100=78.91%, 250=10.22% 00:37:30.841 cpu : usr=44.46%, sys=0.97%, ctx=1390, majf=0, minf=9 00:37:30.841 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, 
>=64=0.0% 00:37:30.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.841 filename0: (groupid=0, jobs=1): err= 0: pid=74931: Wed Apr 17 08:34:02 2024 00:37:30.841 read: IOPS=214, BW=858KiB/s (879kB/s)(8596KiB/10013msec) 00:37:30.841 slat (usec): min=3, max=8028, avg=37.98, stdev=287.67 00:37:30.841 clat (msec): min=18, max=118, avg=74.37, stdev=19.75 00:37:30.841 lat (msec): min=18, max=118, avg=74.41, stdev=19.74 00:37:30.841 clat percentiles (msec): 00:37:30.841 | 1.00th=[ 31], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 59], 00:37:30.841 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 81], 00:37:30.841 | 70.00th=[ 89], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 106], 00:37:30.841 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 118], 99.95th=[ 118], 00:37:30.841 | 99.99th=[ 118] 00:37:30.841 bw ( KiB/s): min= 768, max= 1072, per=4.36%, avg=853.30, stdev=99.47, samples=20 00:37:30.841 iops : min= 192, max= 268, avg=213.30, stdev=24.89, samples=20 00:37:30.841 lat (msec) : 20=0.33%, 50=10.24%, 100=79.01%, 250=10.42% 00:37:30.841 cpu : usr=43.27%, sys=1.14%, ctx=1226, majf=0, minf=9 00:37:30.841 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:37:30.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 issued rwts: total=2149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.841 filename0: (groupid=0, jobs=1): err= 0: pid=74932: Wed Apr 17 08:34:02 2024 00:37:30.841 read: IOPS=204, BW=816KiB/s (836kB/s)(8176KiB/10017msec) 00:37:30.841 slat (usec): min=3, max=8047, avg=33.91, stdev=294.16 00:37:30.841 clat (msec): min=18, max=143, avg=78.27, stdev=19.23 00:37:30.841 lat (msec): min=18, max=143, avg=78.31, stdev=19.23 00:37:30.841 clat percentiles (msec): 00:37:30.841 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:37:30.841 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 85], 00:37:30.841 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 107], 00:37:30.841 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 133], 00:37:30.841 | 99.99th=[ 144] 00:37:30.841 bw ( KiB/s): min= 736, max= 1072, per=4.15%, avg=811.20, stdev=86.84, samples=20 00:37:30.841 iops : min= 184, max= 268, avg=202.80, stdev=21.71, samples=20 00:37:30.841 lat (msec) : 20=0.34%, 50=6.31%, 100=79.70%, 250=13.65% 00:37:30.841 cpu : usr=36.15%, sys=0.79%, ctx=982, majf=0, minf=9 00:37:30.841 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:37:30.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.841 filename0: (groupid=0, jobs=1): err= 0: pid=74933: Wed Apr 17 08:34:02 2024 00:37:30.841 read: IOPS=212, BW=850KiB/s (871kB/s)(8504KiB/10002msec) 00:37:30.841 slat (usec): min=5, max=4045, avg=24.32, stdev=123.80 00:37:30.841 clat (msec): min=2, max=127, avg=75.16, stdev=20.48 00:37:30.841 lat (msec): min=2, max=127, avg=75.19, 
stdev=20.48 00:37:30.841 clat percentiles (msec): 00:37:30.841 | 1.00th=[ 4], 5.00th=[ 46], 10.00th=[ 54], 20.00th=[ 60], 00:37:30.841 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 84], 00:37:30.841 | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 106], 00:37:30.841 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 115], 99.95th=[ 128], 00:37:30.841 | 99.99th=[ 128] 00:37:30.841 bw ( KiB/s): min= 768, max= 1040, per=4.26%, avg=833.26, stdev=69.96, samples=19 00:37:30.841 iops : min= 192, max= 260, avg=208.32, stdev=17.49, samples=19 00:37:30.841 lat (msec) : 4=1.03%, 10=0.33%, 20=0.28%, 50=6.77%, 100=79.92% 00:37:30.841 lat (msec) : 250=11.67% 00:37:30.841 cpu : usr=38.28%, sys=1.03%, ctx=1101, majf=0, minf=9 00:37:30.841 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:37:30.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.841 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.841 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.841 filename0: (groupid=0, jobs=1): err= 0: pid=74934: Wed Apr 17 08:34:02 2024 00:37:30.841 read: IOPS=190, BW=763KiB/s (781kB/s)(7648KiB/10023msec) 00:37:30.841 slat (usec): min=7, max=8037, avg=34.76, stdev=329.78 00:37:30.841 clat (msec): min=30, max=151, avg=83.66, stdev=18.48 00:37:30.841 lat (msec): min=30, max=151, avg=83.70, stdev=18.48 00:37:30.842 clat percentiles (msec): 00:37:30.842 | 1.00th=[ 49], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 63], 00:37:30.842 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 92], 00:37:30.842 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 110], 00:37:30.842 | 99.00th=[ 128], 99.50th=[ 129], 99.90th=[ 153], 99.95th=[ 153], 00:37:30.842 | 99.99th=[ 153] 00:37:30.842 bw ( KiB/s): min= 544, max= 896, per=3.88%, avg=758.30, stdev=85.59, samples=20 00:37:30.842 iops : min= 136, max= 224, avg=189.55, stdev=21.43, samples=20 00:37:30.842 lat (msec) : 50=1.94%, 100=79.92%, 250=18.15% 00:37:30.842 cpu : usr=36.65%, sys=0.74%, ctx=951, majf=0, minf=9 00:37:30.842 IO depths : 1=0.1%, 2=2.5%, 4=9.8%, 8=72.8%, 16=14.8%, 32=0.0%, >=64=0.0% 00:37:30.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 complete : 0=0.0%, 4=89.9%, 8=7.9%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.842 filename0: (groupid=0, jobs=1): err= 0: pid=74935: Wed Apr 17 08:34:02 2024 00:37:30.842 read: IOPS=197, BW=789KiB/s (808kB/s)(7916KiB/10031msec) 00:37:30.842 slat (usec): min=6, max=8014, avg=29.09, stdev=257.16 00:37:30.842 clat (msec): min=23, max=135, avg=80.88, stdev=19.77 00:37:30.842 lat (msec): min=23, max=135, avg=80.91, stdev=19.77 00:37:30.842 clat percentiles (msec): 00:37:30.842 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 61], 00:37:30.842 | 30.00th=[ 68], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 91], 00:37:30.842 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 104], 95.00th=[ 108], 00:37:30.842 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:37:30.842 | 99.99th=[ 136] 00:37:30.842 bw ( KiB/s): min= 680, max= 1048, per=4.03%, avg=788.00, stdev=101.97, samples=20 00:37:30.842 iops : min= 170, max= 262, avg=197.00, stdev=25.49, samples=20 00:37:30.842 lat (msec) : 50=6.06%, 100=78.02%, 250=15.92% 00:37:30.842 cpu : usr=40.04%, 
sys=0.84%, ctx=1291, majf=0, minf=9 00:37:30.842 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=80.5%, 16=16.8%, 32=0.0%, >=64=0.0% 00:37:30.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 complete : 0=0.0%, 4=88.5%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 issued rwts: total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.842 filename0: (groupid=0, jobs=1): err= 0: pid=74936: Wed Apr 17 08:34:02 2024 00:37:30.842 read: IOPS=202, BW=810KiB/s (829kB/s)(8120KiB/10027msec) 00:37:30.842 slat (usec): min=7, max=10046, avg=40.70, stdev=358.46 00:37:30.842 clat (msec): min=24, max=127, avg=78.78, stdev=19.34 00:37:30.842 lat (msec): min=24, max=127, avg=78.82, stdev=19.34 00:37:30.842 clat percentiles (msec): 00:37:30.842 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 61], 00:37:30.842 | 30.00th=[ 66], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 89], 00:37:30.842 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 103], 95.00th=[ 107], 00:37:30.842 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 123], 99.95th=[ 125], 00:37:30.842 | 99.99th=[ 128] 00:37:30.842 bw ( KiB/s): min= 712, max= 1064, per=4.13%, avg=807.80, stdev=107.09, samples=20 00:37:30.842 iops : min= 178, max= 266, avg=201.90, stdev=26.77, samples=20 00:37:30.842 lat (msec) : 50=7.49%, 100=79.01%, 250=13.50% 00:37:30.842 cpu : usr=44.91%, sys=0.93%, ctx=1352, majf=0, minf=9 00:37:30.842 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:37:30.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.842 filename1: (groupid=0, jobs=1): err= 0: pid=74937: Wed Apr 17 08:34:02 2024 00:37:30.842 read: IOPS=207, BW=829KiB/s (849kB/s)(8308KiB/10016msec) 00:37:30.842 slat (usec): min=6, max=8023, avg=34.38, stdev=298.50 00:37:30.842 clat (msec): min=21, max=119, avg=77.00, stdev=19.52 00:37:30.842 lat (msec): min=21, max=120, avg=77.03, stdev=19.51 00:37:30.842 clat percentiles (msec): 00:37:30.842 | 1.00th=[ 31], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:37:30.842 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 85], 00:37:30.842 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:37:30.842 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:37:30.842 | 99.99th=[ 121] 00:37:30.842 bw ( KiB/s): min= 744, max= 1048, per=4.22%, avg=824.45, stdev=90.46, samples=20 00:37:30.842 iops : min= 186, max= 262, avg=206.10, stdev=22.62, samples=20 00:37:30.842 lat (msec) : 50=7.56%, 100=79.30%, 250=13.14% 00:37:30.842 cpu : usr=38.94%, sys=0.84%, ctx=1148, majf=0, minf=9 00:37:30.842 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:37:30.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 issued rwts: total=2077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.842 filename1: (groupid=0, jobs=1): err= 0: pid=74938: Wed Apr 17 08:34:02 2024 00:37:30.842 read: IOPS=206, BW=827KiB/s (847kB/s)(8288KiB/10020msec) 00:37:30.842 slat (usec): min=3, max=8065, avg=29.56, stdev=219.38 00:37:30.842 clat (msec): 
min=20, max=131, avg=77.20, stdev=19.27 00:37:30.842 lat (msec): min=20, max=132, avg=77.22, stdev=19.27 00:37:30.842 clat percentiles (msec): 00:37:30.842 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 61], 00:37:30.842 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 80], 60.00th=[ 85], 00:37:30.842 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 107], 00:37:30.842 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 121], 99.95th=[ 129], 00:37:30.842 | 99.99th=[ 132] 00:37:30.842 bw ( KiB/s): min= 720, max= 1096, per=4.22%, avg=825.25, stdev=102.73, samples=20 00:37:30.842 iops : min= 180, max= 274, avg=206.30, stdev=25.69, samples=20 00:37:30.842 lat (msec) : 50=8.11%, 100=80.31%, 250=11.58% 00:37:30.842 cpu : usr=33.86%, sys=0.82%, ctx=921, majf=0, minf=9 00:37:30.842 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:37:30.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.842 filename1: (groupid=0, jobs=1): err= 0: pid=74939: Wed Apr 17 08:34:02 2024 00:37:30.842 read: IOPS=197, BW=790KiB/s (809kB/s)(7940KiB/10050msec) 00:37:30.842 slat (usec): min=6, max=8020, avg=26.76, stdev=220.26 00:37:30.842 clat (msec): min=5, max=145, avg=80.82, stdev=23.61 00:37:30.842 lat (msec): min=5, max=145, avg=80.84, stdev=23.61 00:37:30.842 clat percentiles (msec): 00:37:30.842 | 1.00th=[ 7], 5.00th=[ 35], 10.00th=[ 50], 20.00th=[ 61], 00:37:30.842 | 30.00th=[ 71], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 94], 00:37:30.842 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 109], 00:37:30.842 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 133], 99.95th=[ 146], 00:37:30.842 | 99.99th=[ 146] 00:37:30.842 bw ( KiB/s): min= 664, max= 1408, per=4.03%, avg=787.35, stdev=181.65, samples=20 00:37:30.842 iops : min= 166, max= 352, avg=196.80, stdev=45.41, samples=20 00:37:30.842 lat (msec) : 10=2.42%, 20=0.81%, 50=6.95%, 100=74.66%, 250=15.16% 00:37:30.842 cpu : usr=34.28%, sys=0.70%, ctx=915, majf=0, minf=0 00:37:30.842 IO depths : 1=0.1%, 2=0.9%, 4=3.2%, 8=78.8%, 16=17.1%, 32=0.0%, >=64=0.0% 00:37:30.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 complete : 0=0.0%, 4=89.1%, 8=10.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 issued rwts: total=1985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.842 filename1: (groupid=0, jobs=1): err= 0: pid=74940: Wed Apr 17 08:34:02 2024 00:37:30.842 read: IOPS=214, BW=858KiB/s (879kB/s)(8596KiB/10013msec) 00:37:30.842 slat (usec): min=3, max=8079, avg=52.88, stdev=488.85 00:37:30.842 clat (msec): min=15, max=119, avg=74.31, stdev=20.14 00:37:30.842 lat (msec): min=15, max=119, avg=74.36, stdev=20.14 00:37:30.842 clat percentiles (msec): 00:37:30.842 | 1.00th=[ 28], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 59], 00:37:30.842 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 84], 00:37:30.842 | 70.00th=[ 89], 80.00th=[ 95], 90.00th=[ 102], 95.00th=[ 107], 00:37:30.842 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 120], 99.95th=[ 120], 00:37:30.842 | 99.99th=[ 120] 00:37:30.842 bw ( KiB/s): min= 760, max= 1120, per=4.36%, avg=853.35, stdev=111.08, samples=20 00:37:30.842 iops : min= 190, max= 280, avg=213.30, stdev=27.78, samples=20 00:37:30.842 lat (msec) : 
20=0.28%, 50=11.35%, 100=78.18%, 250=10.19% 00:37:30.842 cpu : usr=35.05%, sys=0.61%, ctx=956, majf=0, minf=9 00:37:30.842 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:37:30.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 issued rwts: total=2149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.842 filename1: (groupid=0, jobs=1): err= 0: pid=74941: Wed Apr 17 08:34:02 2024 00:37:30.842 read: IOPS=189, BW=758KiB/s (777kB/s)(7620KiB/10047msec) 00:37:30.842 slat (nsec): min=5934, max=74075, avg=18695.39, stdev=10014.41 00:37:30.842 clat (msec): min=5, max=143, avg=84.22, stdev=22.68 00:37:30.842 lat (msec): min=5, max=143, avg=84.24, stdev=22.68 00:37:30.842 clat percentiles (msec): 00:37:30.842 | 1.00th=[ 8], 5.00th=[ 51], 10.00th=[ 56], 20.00th=[ 64], 00:37:30.842 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 95], 00:37:30.842 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 113], 00:37:30.842 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 142], 99.95th=[ 144], 00:37:30.842 | 99.99th=[ 144] 00:37:30.842 bw ( KiB/s): min= 584, max= 1280, per=3.86%, avg=755.30, stdev=155.19, samples=20 00:37:30.842 iops : min= 146, max= 320, avg=188.80, stdev=38.81, samples=20 00:37:30.842 lat (msec) : 10=2.41%, 20=0.10%, 50=2.31%, 100=77.43%, 250=17.74% 00:37:30.842 cpu : usr=35.58%, sys=0.66%, ctx=957, majf=0, minf=9 00:37:30.842 IO depths : 1=0.1%, 2=1.0%, 4=4.3%, 8=77.4%, 16=17.3%, 32=0.0%, >=64=0.0% 00:37:30.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 complete : 0=0.0%, 4=89.7%, 8=9.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.842 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.843 filename1: (groupid=0, jobs=1): err= 0: pid=74942: Wed Apr 17 08:34:02 2024 00:37:30.843 read: IOPS=209, BW=837KiB/s (857kB/s)(8384KiB/10020msec) 00:37:30.843 slat (usec): min=3, max=7013, avg=31.61, stdev=254.06 00:37:30.843 clat (msec): min=26, max=128, avg=76.27, stdev=18.71 00:37:30.843 lat (msec): min=26, max=128, avg=76.30, stdev=18.71 00:37:30.843 clat percentiles (msec): 00:37:30.843 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 60], 00:37:30.843 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 85], 00:37:30.843 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 106], 00:37:30.843 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 120], 99.95th=[ 123], 00:37:30.843 | 99.99th=[ 129] 00:37:30.843 bw ( KiB/s): min= 736, max= 1096, per=4.27%, avg=834.85, stdev=87.10, samples=20 00:37:30.843 iops : min= 184, max= 274, avg=208.70, stdev=21.78, samples=20 00:37:30.843 lat (msec) : 50=6.01%, 100=83.30%, 250=10.69% 00:37:30.843 cpu : usr=41.54%, sys=1.03%, ctx=1486, majf=0, minf=9 00:37:30.843 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:37:30.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 issued rwts: total=2096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.843 filename1: (groupid=0, jobs=1): err= 0: pid=74943: Wed Apr 17 08:34:02 2024 00:37:30.843 read: IOPS=203, BW=816KiB/s 
(836kB/s)(8196KiB/10045msec) 00:37:30.843 slat (usec): min=5, max=8114, avg=34.81, stdev=332.24 00:37:30.843 clat (msec): min=10, max=139, avg=78.23, stdev=21.25 00:37:30.843 lat (msec): min=10, max=139, avg=78.26, stdev=21.25 00:37:30.843 clat percentiles (msec): 00:37:30.843 | 1.00th=[ 14], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 61], 00:37:30.843 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 88], 00:37:30.843 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:37:30.843 | 99.00th=[ 114], 99.50th=[ 124], 99.90th=[ 136], 99.95th=[ 138], 00:37:30.843 | 99.99th=[ 140] 00:37:30.843 bw ( KiB/s): min= 688, max= 1240, per=4.15%, avg=812.85, stdev=136.94, samples=20 00:37:30.843 iops : min= 172, max= 310, avg=203.20, stdev=34.24, samples=20 00:37:30.843 lat (msec) : 20=1.56%, 50=7.27%, 100=77.21%, 250=13.96% 00:37:30.843 cpu : usr=40.55%, sys=1.25%, ctx=1214, majf=0, minf=9 00:37:30.843 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.2%, 16=16.7%, 32=0.0%, >=64=0.0% 00:37:30.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.843 filename1: (groupid=0, jobs=1): err= 0: pid=74944: Wed Apr 17 08:34:02 2024 00:37:30.843 read: IOPS=198, BW=795KiB/s (814kB/s)(7972KiB/10025msec) 00:37:30.843 slat (usec): min=7, max=7028, avg=25.72, stdev=200.78 00:37:30.843 clat (msec): min=30, max=135, avg=80.29, stdev=19.65 00:37:30.843 lat (msec): min=30, max=135, avg=80.31, stdev=19.65 00:37:30.843 clat percentiles (msec): 00:37:30.843 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 57], 20.00th=[ 61], 00:37:30.843 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 90], 00:37:30.843 | 70.00th=[ 94], 80.00th=[ 100], 90.00th=[ 105], 95.00th=[ 107], 00:37:30.843 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 136], 00:37:30.843 | 99.99th=[ 136] 00:37:30.843 bw ( KiB/s): min= 664, max= 1056, per=4.06%, avg=793.50, stdev=102.26, samples=20 00:37:30.843 iops : min= 166, max= 264, avg=198.35, stdev=25.57, samples=20 00:37:30.843 lat (msec) : 50=6.57%, 100=74.46%, 250=18.97% 00:37:30.843 cpu : usr=35.57%, sys=0.84%, ctx=1405, majf=0, minf=9 00:37:30.843 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:37:30.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.843 filename2: (groupid=0, jobs=1): err= 0: pid=74945: Wed Apr 17 08:34:02 2024 00:37:30.843 read: IOPS=204, BW=817KiB/s (837kB/s)(8176KiB/10006msec) 00:37:30.843 slat (usec): min=4, max=8034, avg=40.80, stdev=353.70 00:37:30.843 clat (msec): min=7, max=134, avg=78.17, stdev=19.86 00:37:30.843 lat (msec): min=7, max=134, avg=78.21, stdev=19.86 00:37:30.843 clat percentiles (msec): 00:37:30.843 | 1.00th=[ 28], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 61], 00:37:30.843 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 86], 00:37:30.843 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:37:30.843 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 134], 00:37:30.843 | 99.99th=[ 134] 00:37:30.843 bw ( KiB/s): min= 688, max= 1032, per=4.14%, avg=810.63, stdev=95.67, 
samples=19 00:37:30.843 iops : min= 172, max= 258, avg=202.63, stdev=23.89, samples=19 00:37:30.843 lat (msec) : 10=0.15%, 50=8.12%, 100=77.84%, 250=13.89% 00:37:30.843 cpu : usr=40.98%, sys=0.85%, ctx=1205, majf=0, minf=0 00:37:30.843 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.3%, 16=16.5%, 32=0.0%, >=64=0.0% 00:37:30.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.843 filename2: (groupid=0, jobs=1): err= 0: pid=74946: Wed Apr 17 08:34:02 2024 00:37:30.843 read: IOPS=202, BW=811KiB/s (830kB/s)(8144KiB/10047msec) 00:37:30.843 slat (usec): min=4, max=6042, avg=24.47, stdev=162.57 00:37:30.843 clat (msec): min=4, max=141, avg=78.75, stdev=23.58 00:37:30.843 lat (msec): min=4, max=141, avg=78.78, stdev=23.58 00:37:30.843 clat percentiles (msec): 00:37:30.843 | 1.00th=[ 6], 5.00th=[ 41], 10.00th=[ 54], 20.00th=[ 61], 00:37:30.843 | 30.00th=[ 66], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 89], 00:37:30.843 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 109], 00:37:30.843 | 99.00th=[ 127], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 136], 00:37:30.843 | 99.99th=[ 142] 00:37:30.843 bw ( KiB/s): min= 656, max= 1520, per=4.13%, avg=808.00, stdev=187.89, samples=20 00:37:30.843 iops : min= 164, max= 380, avg=202.00, stdev=46.97, samples=20 00:37:30.843 lat (msec) : 10=3.05%, 20=0.10%, 50=4.81%, 100=76.67%, 250=15.37% 00:37:30.843 cpu : usr=44.02%, sys=0.86%, ctx=1434, majf=0, minf=9 00:37:30.843 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=78.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:37:30.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.843 filename2: (groupid=0, jobs=1): err= 0: pid=74947: Wed Apr 17 08:34:02 2024 00:37:30.843 read: IOPS=212, BW=851KiB/s (872kB/s)(8532KiB/10023msec) 00:37:30.843 slat (usec): min=3, max=7452, avg=42.04, stdev=333.72 00:37:30.843 clat (msec): min=25, max=144, avg=74.95, stdev=20.13 00:37:30.843 lat (msec): min=26, max=144, avg=74.99, stdev=20.12 00:37:30.843 clat percentiles (msec): 00:37:30.843 | 1.00th=[ 32], 5.00th=[ 43], 10.00th=[ 49], 20.00th=[ 58], 00:37:30.843 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 84], 00:37:30.843 | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 102], 95.00th=[ 107], 00:37:30.843 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 124], 99.95th=[ 136], 00:37:30.843 | 99.99th=[ 144] 00:37:30.843 bw ( KiB/s): min= 736, max= 1128, per=4.34%, avg=849.25, stdev=116.52, samples=20 00:37:30.843 iops : min= 184, max= 282, avg=212.30, stdev=29.14, samples=20 00:37:30.843 lat (msec) : 50=11.53%, 100=76.70%, 250=11.77% 00:37:30.843 cpu : usr=39.83%, sys=0.85%, ctx=1322, majf=0, minf=9 00:37:30.843 IO depths : 1=0.1%, 2=0.1%, 4=0.7%, 8=83.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:37:30.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 issued rwts: total=2133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.843 filename2: (groupid=0, jobs=1): err= 
0: pid=74948: Wed Apr 17 08:34:02 2024 00:37:30.843 read: IOPS=208, BW=834KiB/s (854kB/s)(8352KiB/10011msec) 00:37:30.843 slat (usec): min=7, max=8075, avg=37.25, stdev=328.08 00:37:30.843 clat (msec): min=18, max=132, avg=76.57, stdev=19.97 00:37:30.843 lat (msec): min=18, max=132, avg=76.60, stdev=19.96 00:37:30.843 clat percentiles (msec): 00:37:30.843 | 1.00th=[ 31], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 59], 00:37:30.843 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 84], 00:37:30.843 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:37:30.843 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 131], 99.95th=[ 132], 00:37:30.843 | 99.99th=[ 133] 00:37:30.843 bw ( KiB/s): min= 688, max= 1096, per=4.24%, avg=829.90, stdev=105.59, samples=20 00:37:30.843 iops : min= 172, max= 274, avg=207.45, stdev=26.37, samples=20 00:37:30.843 lat (msec) : 20=0.34%, 50=7.95%, 100=78.26%, 250=13.46% 00:37:30.843 cpu : usr=34.01%, sys=0.88%, ctx=920, majf=0, minf=9 00:37:30.843 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:37:30.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.843 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.843 filename2: (groupid=0, jobs=1): err= 0: pid=74949: Wed Apr 17 08:34:02 2024 00:37:30.843 read: IOPS=200, BW=801KiB/s (820kB/s)(8048KiB/10051msec) 00:37:30.843 slat (usec): min=6, max=4030, avg=24.69, stdev=152.50 00:37:30.843 clat (msec): min=2, max=142, avg=79.76, stdev=24.32 00:37:30.843 lat (msec): min=2, max=143, avg=79.78, stdev=24.32 00:37:30.843 clat percentiles (msec): 00:37:30.843 | 1.00th=[ 4], 5.00th=[ 31], 10.00th=[ 50], 20.00th=[ 62], 00:37:30.843 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 94], 00:37:30.843 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 108], 00:37:30.843 | 99.00th=[ 115], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 144], 00:37:30.843 | 99.99th=[ 144] 00:37:30.843 bw ( KiB/s): min= 656, max= 1579, per=4.08%, avg=798.25, stdev=218.92, samples=20 00:37:30.843 iops : min= 164, max= 394, avg=199.50, stdev=54.58, samples=20 00:37:30.843 lat (msec) : 4=1.59%, 10=2.29%, 20=0.10%, 50=6.11%, 100=77.24% 00:37:30.843 lat (msec) : 250=12.67% 00:37:30.844 cpu : usr=36.83%, sys=0.92%, ctx=1068, majf=0, minf=9 00:37:30.844 IO depths : 1=0.2%, 2=0.7%, 4=2.3%, 8=79.6%, 16=17.2%, 32=0.0%, >=64=0.0% 00:37:30.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.844 complete : 0=0.0%, 4=88.9%, 8=10.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.844 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.844 filename2: (groupid=0, jobs=1): err= 0: pid=74950: Wed Apr 17 08:34:02 2024 00:37:30.844 read: IOPS=199, BW=799KiB/s (818kB/s)(8012KiB/10033msec) 00:37:30.844 slat (usec): min=4, max=8043, avg=30.61, stdev=284.82 00:37:30.844 clat (msec): min=12, max=138, avg=79.98, stdev=20.89 00:37:30.844 lat (msec): min=12, max=138, avg=80.01, stdev=20.89 00:37:30.844 clat percentiles (msec): 00:37:30.844 | 1.00th=[ 24], 5.00th=[ 46], 10.00th=[ 57], 20.00th=[ 61], 00:37:30.844 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 85], 60.00th=[ 91], 00:37:30.844 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 105], 95.00th=[ 107], 00:37:30.844 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 136], 
99.95th=[ 138], 00:37:30.844 | 99.99th=[ 138] 00:37:30.844 bw ( KiB/s): min= 680, max= 1048, per=4.06%, avg=794.80, stdev=109.51, samples=20 00:37:30.844 iops : min= 170, max= 262, avg=198.70, stdev=27.38, samples=20 00:37:30.844 lat (msec) : 20=0.80%, 50=6.39%, 100=77.68%, 250=15.13% 00:37:30.844 cpu : usr=38.05%, sys=0.80%, ctx=1118, majf=0, minf=9 00:37:30.844 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=80.5%, 16=16.7%, 32=0.0%, >=64=0.0% 00:37:30.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.844 complete : 0=0.0%, 4=88.4%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.844 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.844 filename2: (groupid=0, jobs=1): err= 0: pid=74951: Wed Apr 17 08:34:02 2024 00:37:30.844 read: IOPS=202, BW=811KiB/s (831kB/s)(8140KiB/10035msec) 00:37:30.844 slat (usec): min=6, max=8022, avg=34.80, stdev=331.58 00:37:30.844 clat (msec): min=13, max=129, avg=78.67, stdev=20.18 00:37:30.844 lat (msec): min=13, max=129, avg=78.70, stdev=20.17 00:37:30.844 clat percentiles (msec): 00:37:30.844 | 1.00th=[ 27], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:37:30.844 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 88], 00:37:30.844 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 107], 00:37:30.844 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 124], 99.95th=[ 128], 00:37:30.844 | 99.99th=[ 130] 00:37:30.844 bw ( KiB/s): min= 688, max= 1088, per=4.14%, avg=810.40, stdev=115.47, samples=20 00:37:30.844 iops : min= 172, max= 272, avg=202.60, stdev=28.87, samples=20 00:37:30.844 lat (msec) : 20=0.79%, 50=7.96%, 100=78.13%, 250=13.12% 00:37:30.844 cpu : usr=39.48%, sys=0.92%, ctx=1173, majf=0, minf=9 00:37:30.844 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:37:30.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.844 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.844 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.844 filename2: (groupid=0, jobs=1): err= 0: pid=74952: Wed Apr 17 08:34:02 2024 00:37:30.844 read: IOPS=217, BW=869KiB/s (889kB/s)(8688KiB/10002msec) 00:37:30.844 slat (usec): min=5, max=8058, avg=36.23, stdev=334.53 00:37:30.844 clat (usec): min=1387, max=135735, avg=73519.51, stdev=23076.96 00:37:30.844 lat (usec): min=1394, max=135763, avg=73555.74, stdev=23074.83 00:37:30.844 clat percentiles (usec): 00:37:30.844 | 1.00th=[ 1516], 5.00th=[ 36439], 10.00th=[ 47973], 20.00th=[ 56886], 00:37:30.844 | 30.00th=[ 61080], 40.00th=[ 66323], 50.00th=[ 71828], 60.00th=[ 82314], 00:37:30.844 | 70.00th=[ 89654], 80.00th=[ 94897], 90.00th=[101188], 95.00th=[106431], 00:37:30.844 | 99.00th=[111674], 99.50th=[115868], 99.90th=[126354], 99.95th=[135267], 00:37:30.844 | 99.99th=[135267] 00:37:30.844 bw ( KiB/s): min= 744, max= 1152, per=4.31%, avg=842.11, stdev=112.60, samples=19 00:37:30.844 iops : min= 186, max= 288, avg=210.53, stdev=28.15, samples=19 00:37:30.844 lat (msec) : 2=1.47%, 4=1.01%, 10=0.14%, 20=0.32%, 50=8.75% 00:37:30.844 lat (msec) : 100=76.01%, 250=12.29% 00:37:30.844 cpu : usr=36.33%, sys=0.81%, ctx=1207, majf=0, minf=9 00:37:30.844 IO depths : 1=0.1%, 2=0.5%, 4=1.4%, 8=82.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:37:30.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.844 complete : 0=0.0%, 
4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.844 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:30.844 00:37:30.844 Run status group 0 (all jobs): 00:37:30.844 READ: bw=19.1MiB/s (20.0MB/s), 738KiB/s-869KiB/s (755kB/s-890kB/s), io=192MiB (201MB), run=10001-10051msec 00:37:30.844 08:34:02 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:30.844 08:34:02 -- target/dif.sh@43 -- # local sub 00:37:30.844 08:34:02 -- target/dif.sh@45 -- # for sub in "$@" 00:37:30.844 08:34:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:30.844 08:34:02 -- target/dif.sh@36 -- # local sub_id=0 00:37:30.844 08:34:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@45 -- # for sub in "$@" 00:37:30.844 08:34:02 -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:30.844 08:34:02 -- target/dif.sh@36 -- # local sub_id=1 00:37:30.844 08:34:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@45 -- # for sub in "$@" 00:37:30.844 08:34:02 -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:30.844 08:34:02 -- target/dif.sh@36 -- # local sub_id=2 00:37:30.844 08:34:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@115 -- # NULL_DIF=1 00:37:30.844 08:34:02 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:30.844 08:34:02 -- target/dif.sh@115 -- # numjobs=2 00:37:30.844 08:34:02 -- target/dif.sh@115 -- # iodepth=8 00:37:30.844 08:34:02 -- target/dif.sh@115 -- # runtime=5 00:37:30.844 08:34:02 -- target/dif.sh@115 -- # files=1 00:37:30.844 08:34:02 -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:30.844 08:34:02 -- target/dif.sh@28 -- # local sub 00:37:30.844 08:34:02 -- target/dif.sh@30 -- # for sub in "$@" 00:37:30.844 08:34:02 -- target/dif.sh@31 -- # create_subsystem 0 00:37:30.844 
08:34:02 -- target/dif.sh@18 -- # local sub_id=0 00:37:30.844 08:34:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 bdev_null0 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 [2024-04-17 08:34:02.406558] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@30 -- # for sub in "$@" 00:37:30.844 08:34:02 -- target/dif.sh@31 -- # create_subsystem 1 00:37:30.844 08:34:02 -- target/dif.sh@18 -- # local sub_id=1 00:37:30.844 08:34:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 bdev_null1 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:30.844 08:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.844 08:34:02 -- common/autotest_common.sh@10 -- # set +x 00:37:30.844 08:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.844 08:34:02 -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:30.844 08:34:02 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:30.844 08:34:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:30.845 08:34:02 -- nvmf/common.sh@520 -- # config=() 00:37:30.845 08:34:02 -- nvmf/common.sh@520 -- # local subsystem config 00:37:30.845 08:34:02 -- nvmf/common.sh@522 -- 
# for subsystem in "${@:-1}" 00:37:30.845 08:34:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:30.845 08:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:30.845 { 00:37:30.845 "params": { 00:37:30.845 "name": "Nvme$subsystem", 00:37:30.845 "trtype": "$TEST_TRANSPORT", 00:37:30.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:30.845 "adrfam": "ipv4", 00:37:30.845 "trsvcid": "$NVMF_PORT", 00:37:30.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:30.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:30.845 "hdgst": ${hdgst:-false}, 00:37:30.845 "ddgst": ${ddgst:-false} 00:37:30.845 }, 00:37:30.845 "method": "bdev_nvme_attach_controller" 00:37:30.845 } 00:37:30.845 EOF 00:37:30.845 )") 00:37:30.845 08:34:02 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:30.845 08:34:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:37:30.845 08:34:02 -- target/dif.sh@82 -- # gen_fio_conf 00:37:30.845 08:34:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:30.845 08:34:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:37:30.845 08:34:02 -- target/dif.sh@54 -- # local file 00:37:30.845 08:34:02 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:30.845 08:34:02 -- common/autotest_common.sh@1320 -- # shift 00:37:30.845 08:34:02 -- target/dif.sh@56 -- # cat 00:37:30.845 08:34:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:37:30.845 08:34:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:30.845 08:34:02 -- nvmf/common.sh@542 -- # cat 00:37:30.845 08:34:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:30.845 08:34:02 -- common/autotest_common.sh@1324 -- # grep libasan 00:37:30.845 08:34:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:30.845 08:34:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:37:30.845 08:34:02 -- target/dif.sh@72 -- # (( file <= files )) 00:37:30.845 08:34:02 -- target/dif.sh@73 -- # cat 00:37:30.845 08:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:30.845 08:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:30.845 { 00:37:30.845 "params": { 00:37:30.845 "name": "Nvme$subsystem", 00:37:30.845 "trtype": "$TEST_TRANSPORT", 00:37:30.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:30.845 "adrfam": "ipv4", 00:37:30.845 "trsvcid": "$NVMF_PORT", 00:37:30.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:30.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:30.845 "hdgst": ${hdgst:-false}, 00:37:30.845 "ddgst": ${ddgst:-false} 00:37:30.845 }, 00:37:30.845 "method": "bdev_nvme_attach_controller" 00:37:30.845 } 00:37:30.845 EOF 00:37:30.845 )") 00:37:30.845 08:34:02 -- nvmf/common.sh@542 -- # cat 00:37:30.845 08:34:02 -- target/dif.sh@72 -- # (( file++ )) 00:37:30.845 08:34:02 -- target/dif.sh@72 -- # (( file <= files )) 00:37:30.845 08:34:02 -- nvmf/common.sh@544 -- # jq . 
00:37:30.845 08:34:02 -- nvmf/common.sh@545 -- # IFS=, 00:37:30.845 08:34:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:37:30.845 "params": { 00:37:30.845 "name": "Nvme0", 00:37:30.845 "trtype": "tcp", 00:37:30.845 "traddr": "10.0.0.2", 00:37:30.845 "adrfam": "ipv4", 00:37:30.845 "trsvcid": "4420", 00:37:30.845 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:30.845 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:30.845 "hdgst": false, 00:37:30.845 "ddgst": false 00:37:30.845 }, 00:37:30.845 "method": "bdev_nvme_attach_controller" 00:37:30.845 },{ 00:37:30.845 "params": { 00:37:30.845 "name": "Nvme1", 00:37:30.845 "trtype": "tcp", 00:37:30.845 "traddr": "10.0.0.2", 00:37:30.845 "adrfam": "ipv4", 00:37:30.845 "trsvcid": "4420", 00:37:30.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:30.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:30.845 "hdgst": false, 00:37:30.845 "ddgst": false 00:37:30.845 }, 00:37:30.845 "method": "bdev_nvme_attach_controller" 00:37:30.845 }' 00:37:30.845 08:34:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:30.845 08:34:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:30.845 08:34:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:30.845 08:34:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:30.845 08:34:02 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:37:30.845 08:34:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:30.845 08:34:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:30.845 08:34:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:30.845 08:34:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:30.845 08:34:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:30.845 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:30.845 ... 00:37:30.845 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:30.845 ... 00:37:30.845 fio-3.35 00:37:30.845 Starting 4 threads 00:37:30.845 [2024-04-17 08:34:03.101503] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
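The xtrace above shows the pattern dif.sh uses to drive fio through the SPDK bdev plugin: gen_nvmf_target_json emits one bdev_nvme_attach_controller fragment per subsystem id, the fragments are comma-joined and pretty-printed with jq, and the resulting JSON is handed to fio via --spdk_json_conf while build/fio/spdk_bdev is LD_PRELOADed. A minimal standalone sketch of that flow follows. The outer "subsystems"/"bdev" wrapper, the thread/time_based job options, and the Nvme0n1/Nvme1n1 bdev names used as fio filenames are assumptions; the trace only shows the per-controller fragments, the job names filename0/filename1, and the bs/iodepth/numjobs/runtime values.

#!/usr/bin/env bash
# Sketch: rebuild the JSON that the trace above prints, then launch fio
# against it with the SPDK bdev ioengine plugin.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach_controller fragment per subsystem, as in the trace.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments inside a bdev-subsystem wrapper (assumed) and
    # pretty-print, mirroring the jq / IFS=, / printf steps seen in the trace.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=","; printf '%s' "${config[*]}") ]
    }
  ]
}
JSON
}

gen_nvmf_target_json 0 1 > /tmp/bdev.json   # two controllers, as in this run

# Job file matching the filename0/filename1 jobs above (randread, bs=8k,16k,128k,
# iodepth=8, numjobs=2, runtime=5); the bdev names are hypothetical stand-ins
# for whatever bdev_nvme_attach_controller exposed.
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# Preload the SPDK fio plugin and point it at the generated JSON, as the
# harness does with /dev/fd descriptors.
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio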
00:37:30.845 [2024-04-17 08:34:03.101546] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:35.041 00:37:35.041 filename0: (groupid=0, jobs=1): err= 0: pid=75104: Wed Apr 17 08:34:08 2024 00:37:35.041 read: IOPS=2483, BW=19.4MiB/s (20.3MB/s)(97.1MiB/5002msec) 00:37:35.041 slat (nsec): min=5691, max=55439, avg=13346.83, stdev=7956.72 00:37:35.041 clat (usec): min=449, max=6228, avg=3164.14, stdev=453.61 00:37:35.041 lat (usec): min=460, max=6235, avg=3177.48, stdev=453.63 00:37:35.041 clat percentiles (usec): 00:37:35.041 | 1.00th=[ 1565], 5.00th=[ 2245], 10.00th=[ 2900], 20.00th=[ 2966], 00:37:35.041 | 30.00th=[ 3032], 40.00th=[ 3097], 50.00th=[ 3163], 60.00th=[ 3261], 00:37:35.041 | 70.00th=[ 3326], 80.00th=[ 3425], 90.00th=[ 3589], 95.00th=[ 3785], 00:37:35.041 | 99.00th=[ 4293], 99.50th=[ 4621], 99.90th=[ 5473], 99.95th=[ 6063], 00:37:35.041 | 99.99th=[ 6063] 00:37:35.041 bw ( KiB/s): min=18944, max=22912, per=25.09%, avg=19866.67, stdev=1248.72, samples=9 00:37:35.041 iops : min= 2368, max= 2864, avg=2483.33, stdev=156.09, samples=9 00:37:35.041 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.09% 00:37:35.041 lat (msec) : 2=3.58%, 4=94.16%, 10=2.14% 00:37:35.041 cpu : usr=95.04%, sys=4.32%, ctx=8, majf=0, minf=0 00:37:35.041 IO depths : 1=6.6%, 2=21.2%, 4=52.5%, 8=19.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:35.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.041 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.041 issued rwts: total=12423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.041 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:35.041 filename0: (groupid=0, jobs=1): err= 0: pid=75105: Wed Apr 17 08:34:08 2024 00:37:35.041 read: IOPS=2470, BW=19.3MiB/s (20.2MB/s)(96.5MiB/5002msec) 00:37:35.041 slat (nsec): min=6059, max=85886, avg=22661.97, stdev=11459.16 00:37:35.041 clat (usec): min=1166, max=6106, avg=3137.98, stdev=419.66 00:37:35.041 lat (usec): min=1179, max=6137, avg=3160.64, stdev=420.64 00:37:35.041 clat percentiles (usec): 00:37:35.041 | 1.00th=[ 1614], 5.00th=[ 2573], 10.00th=[ 2835], 20.00th=[ 2933], 00:37:35.041 | 30.00th=[ 2999], 40.00th=[ 3064], 50.00th=[ 3130], 60.00th=[ 3195], 00:37:35.041 | 70.00th=[ 3294], 80.00th=[ 3392], 90.00th=[ 3556], 95.00th=[ 3752], 00:37:35.041 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 5669], 99.95th=[ 6063], 00:37:35.041 | 99.99th=[ 6128] 00:37:35.041 bw ( KiB/s): min=18112, max=21744, per=24.92%, avg=19733.33, stdev=968.66, samples=9 00:37:35.041 iops : min= 2264, max= 2718, avg=2466.67, stdev=121.08, samples=9 00:37:35.041 lat (msec) : 2=2.89%, 4=95.08%, 10=2.03% 00:37:35.041 cpu : usr=97.64%, sys=1.76%, ctx=8, majf=0, minf=1 00:37:35.041 IO depths : 1=7.5%, 2=20.5%, 4=53.2%, 8=18.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:35.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.041 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.041 issued rwts: total=12356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.041 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:35.041 filename1: (groupid=0, jobs=1): err= 0: pid=75106: Wed Apr 17 08:34:08 2024 00:37:35.041 read: IOPS=2474, BW=19.3MiB/s (20.3MB/s)(96.7MiB/5001msec) 00:37:35.041 slat (usec): min=6, max=112, avg=21.71, stdev=11.09 00:37:35.041 clat (usec): min=389, max=5960, avg=3136.57, stdev=477.63 00:37:35.041 lat (usec): min=395, max=5969, avg=3158.27, stdev=478.91 00:37:35.041 clat percentiles 
(usec): 00:37:35.041 | 1.00th=[ 922], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 2933], 00:37:35.041 | 30.00th=[ 2999], 40.00th=[ 3064], 50.00th=[ 3130], 60.00th=[ 3195], 00:37:35.041 | 70.00th=[ 3294], 80.00th=[ 3392], 90.00th=[ 3589], 95.00th=[ 3818], 00:37:35.041 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 5342], 99.95th=[ 5538], 00:37:35.041 | 99.99th=[ 5932] 00:37:35.041 bw ( KiB/s): min=18112, max=21680, per=24.98%, avg=19775.11, stdev=1214.05, samples=9 00:37:35.041 iops : min= 2264, max= 2710, avg=2471.89, stdev=151.76, samples=9 00:37:35.041 lat (usec) : 500=0.02%, 750=0.08%, 1000=1.10% 00:37:35.041 lat (msec) : 2=1.82%, 4=94.48%, 10=2.50% 00:37:35.041 cpu : usr=97.70%, sys=1.68%, ctx=11, majf=0, minf=9 00:37:35.041 IO depths : 1=7.4%, 2=19.6%, 4=54.1%, 8=19.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:35.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.041 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.041 issued rwts: total=12377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.041 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:35.041 filename1: (groupid=0, jobs=1): err= 0: pid=75107: Wed Apr 17 08:34:08 2024 00:37:35.041 read: IOPS=2468, BW=19.3MiB/s (20.2MB/s)(96.5MiB/5001msec) 00:37:35.041 slat (usec): min=6, max=111, avg=21.97, stdev=11.32 00:37:35.041 clat (usec): min=527, max=5870, avg=3141.59, stdev=461.75 00:37:35.041 lat (usec): min=533, max=5900, avg=3163.56, stdev=463.10 00:37:35.041 clat percentiles (usec): 00:37:35.041 | 1.00th=[ 898], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2933], 00:37:35.041 | 30.00th=[ 2999], 40.00th=[ 3064], 50.00th=[ 3130], 60.00th=[ 3195], 00:37:35.041 | 70.00th=[ 3294], 80.00th=[ 3392], 90.00th=[ 3556], 95.00th=[ 3785], 00:37:35.041 | 99.00th=[ 4293], 99.50th=[ 4555], 99.90th=[ 5145], 99.95th=[ 5604], 00:37:35.041 | 99.99th=[ 5669] 00:37:35.041 bw ( KiB/s): min=19072, max=21904, per=24.90%, avg=19715.56, stdev=930.13, samples=9 00:37:35.041 iops : min= 2384, max= 2738, avg=2464.44, stdev=116.27, samples=9 00:37:35.041 lat (usec) : 750=0.11%, 1000=1.18% 00:37:35.041 lat (msec) : 2=1.52%, 4=94.62%, 10=2.56% 00:37:35.041 cpu : usr=97.10%, sys=2.22%, ctx=68, majf=0, minf=9 00:37:35.041 IO depths : 1=7.5%, 2=20.4%, 4=53.4%, 8=18.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:35.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.041 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.041 issued rwts: total=12346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.041 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:35.041 00:37:35.041 Run status group 0 (all jobs): 00:37:35.041 READ: bw=77.3MiB/s (81.1MB/s), 19.3MiB/s-19.4MiB/s (20.2MB/s-20.3MB/s), io=387MiB (406MB), run=5001-5002msec 00:37:35.301 08:34:08 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:35.301 08:34:08 -- target/dif.sh@43 -- # local sub 00:37:35.301 08:34:08 -- target/dif.sh@45 -- # for sub in "$@" 00:37:35.301 08:34:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:35.301 08:34:08 -- target/dif.sh@36 -- # local sub_id=0 00:37:35.301 08:34:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:35.301 08:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:35.301 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.301 08:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:35.301 08:34:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:35.301 
08:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:35.301 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.301 08:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:35.301 08:34:08 -- target/dif.sh@45 -- # for sub in "$@" 00:37:35.301 08:34:08 -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:35.301 08:34:08 -- target/dif.sh@36 -- # local sub_id=1 00:37:35.301 08:34:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:35.301 08:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:35.301 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.301 08:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:35.301 08:34:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:35.301 08:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:35.301 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.301 08:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:35.301 00:37:35.301 real 0m23.575s 00:37:35.301 user 2m9.185s 00:37:35.301 sys 0m4.054s 00:37:35.301 08:34:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:35.301 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.301 ************************************ 00:37:35.301 END TEST fio_dif_rand_params 00:37:35.301 ************************************ 00:37:35.301 08:34:08 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:35.301 08:34:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:35.301 08:34:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:35.301 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.301 ************************************ 00:37:35.301 START TEST fio_dif_digest 00:37:35.301 ************************************ 00:37:35.301 08:34:08 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:37:35.301 08:34:08 -- target/dif.sh@123 -- # local NULL_DIF 00:37:35.301 08:34:08 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:35.301 08:34:08 -- target/dif.sh@125 -- # local hdgst ddgst 00:37:35.301 08:34:08 -- target/dif.sh@127 -- # NULL_DIF=3 00:37:35.301 08:34:08 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:35.301 08:34:08 -- target/dif.sh@127 -- # numjobs=3 00:37:35.301 08:34:08 -- target/dif.sh@127 -- # iodepth=3 00:37:35.301 08:34:08 -- target/dif.sh@127 -- # runtime=10 00:37:35.301 08:34:08 -- target/dif.sh@128 -- # hdgst=true 00:37:35.301 08:34:08 -- target/dif.sh@128 -- # ddgst=true 00:37:35.301 08:34:08 -- target/dif.sh@130 -- # create_subsystems 0 00:37:35.301 08:34:08 -- target/dif.sh@28 -- # local sub 00:37:35.301 08:34:08 -- target/dif.sh@30 -- # for sub in "$@" 00:37:35.301 08:34:08 -- target/dif.sh@31 -- # create_subsystem 0 00:37:35.301 08:34:08 -- target/dif.sh@18 -- # local sub_id=0 00:37:35.301 08:34:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:35.301 08:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:35.301 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.301 bdev_null0 00:37:35.301 08:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:35.301 08:34:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:35.301 08:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:35.302 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.302 08:34:08 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:37:35.302 08:34:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:35.302 08:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:35.302 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.302 08:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:35.302 08:34:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:35.302 08:34:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:35.302 08:34:08 -- common/autotest_common.sh@10 -- # set +x 00:37:35.302 [2024-04-17 08:34:08.604158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:35.302 08:34:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:35.302 08:34:08 -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:35.302 08:34:08 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:35.302 08:34:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:35.302 08:34:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.302 08:34:08 -- nvmf/common.sh@520 -- # config=() 00:37:35.302 08:34:08 -- nvmf/common.sh@520 -- # local subsystem config 00:37:35.302 08:34:08 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.302 08:34:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:37:35.302 08:34:08 -- target/dif.sh@82 -- # gen_fio_conf 00:37:35.302 08:34:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:37:35.302 { 00:37:35.302 "params": { 00:37:35.302 "name": "Nvme$subsystem", 00:37:35.302 "trtype": "$TEST_TRANSPORT", 00:37:35.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.302 "adrfam": "ipv4", 00:37:35.302 "trsvcid": "$NVMF_PORT", 00:37:35.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.302 "hdgst": ${hdgst:-false}, 00:37:35.302 "ddgst": ${ddgst:-false} 00:37:35.302 }, 00:37:35.302 "method": "bdev_nvme_attach_controller" 00:37:35.302 } 00:37:35.302 EOF 00:37:35.302 )") 00:37:35.302 08:34:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:37:35.302 08:34:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:35.302 08:34:08 -- target/dif.sh@54 -- # local file 00:37:35.302 08:34:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:37:35.302 08:34:08 -- target/dif.sh@56 -- # cat 00:37:35.302 08:34:08 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:35.302 08:34:08 -- common/autotest_common.sh@1320 -- # shift 00:37:35.302 08:34:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:37:35.302 08:34:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:35.302 08:34:08 -- nvmf/common.sh@542 -- # cat 00:37:35.302 08:34:08 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:35.302 08:34:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:37:35.302 08:34:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:37:35.302 08:34:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:35.302 08:34:08 -- target/dif.sh@72 -- # (( file <= files )) 00:37:35.302 08:34:08 -- nvmf/common.sh@544 -- # jq . 
00:37:35.302 08:34:08 -- nvmf/common.sh@545 -- # IFS=, 00:37:35.302 08:34:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:37:35.302 "params": { 00:37:35.302 "name": "Nvme0", 00:37:35.302 "trtype": "tcp", 00:37:35.302 "traddr": "10.0.0.2", 00:37:35.302 "adrfam": "ipv4", 00:37:35.302 "trsvcid": "4420", 00:37:35.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:35.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:35.302 "hdgst": true, 00:37:35.302 "ddgst": true 00:37:35.302 }, 00:37:35.302 "method": "bdev_nvme_attach_controller" 00:37:35.302 }' 00:37:35.561 08:34:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:35.561 08:34:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:35.561 08:34:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:37:35.561 08:34:08 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:37:35.561 08:34:08 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:35.561 08:34:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:37:35.561 08:34:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:37:35.561 08:34:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:37:35.561 08:34:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:35.561 08:34:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.561 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:35.561 ... 00:37:35.561 fio-3.35 00:37:35.561 Starting 3 threads 00:37:36.129 [2024-04-17 08:34:09.194618] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:37:36.129 [2024-04-17 08:34:09.194660] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:46.110 00:37:46.110 filename0: (groupid=0, jobs=1): err= 0: pid=75210: Wed Apr 17 08:34:19 2024 00:37:46.110 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(339MiB/10006msec) 00:37:46.110 slat (nsec): min=6424, max=36967, avg=10226.97, stdev=3891.73 00:37:46.110 clat (usec): min=9952, max=13993, avg=11042.37, stdev=756.85 00:37:46.110 lat (usec): min=9959, max=14013, avg=11052.60, stdev=757.44 00:37:46.110 clat percentiles (usec): 00:37:46.110 | 1.00th=[10028], 5.00th=[10028], 10.00th=[10159], 20.00th=[10290], 00:37:46.110 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11207], 00:37:46.110 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:37:46.110 | 99.00th=[13304], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:37:46.110 | 99.99th=[13960] 00:37:46.110 bw ( KiB/s): min=33024, max=36864, per=33.28%, avg=34637.11, stdev=1167.99, samples=19 00:37:46.110 iops : min= 258, max= 288, avg=270.58, stdev= 9.11, samples=19 00:37:46.110 lat (msec) : 10=0.37%, 20=99.63% 00:37:46.110 cpu : usr=94.62%, sys=4.96%, ctx=18, majf=0, minf=0 00:37:46.110 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.110 issued rwts: total=2712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.110 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:46.110 filename0: (groupid=0, jobs=1): err= 0: pid=75211: Wed Apr 17 08:34:19 2024 00:37:46.110 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(339MiB/10006msec) 00:37:46.110 slat (nsec): min=6476, max=87088, avg=10942.45, stdev=5165.67 00:37:46.110 clat (usec): min=6339, max=14474, avg=11040.49, stdev=774.71 00:37:46.110 lat (usec): min=6346, max=14490, avg=11051.43, stdev=775.49 00:37:46.110 clat percentiles (usec): 00:37:46.110 | 1.00th=[10028], 5.00th=[10028], 10.00th=[10159], 20.00th=[10290], 00:37:46.110 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:37:46.110 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:37:46.110 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14484], 99.95th=[14484], 00:37:46.110 | 99.99th=[14484] 00:37:46.110 bw ( KiB/s): min=32256, max=36096, per=33.25%, avg=34604.11, stdev=1242.69, samples=19 00:37:46.110 iops : min= 252, max= 282, avg=270.32, stdev= 9.69, samples=19 00:37:46.110 lat (msec) : 10=0.48%, 20=99.52% 00:37:46.110 cpu : usr=94.27%, sys=5.20%, ctx=151, majf=0, minf=9 00:37:46.110 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.110 issued rwts: total=2712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.110 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:46.110 filename0: (groupid=0, jobs=1): err= 0: pid=75212: Wed Apr 17 08:34:19 2024 00:37:46.110 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(339MiB/10005msec) 00:37:46.110 slat (nsec): min=6412, max=43172, avg=10848.07, stdev=4733.92 00:37:46.110 clat (usec): min=7580, max=14398, avg=11039.61, stdev=771.72 00:37:46.110 lat (usec): min=7587, max=14425, avg=11050.45, stdev=772.65 00:37:46.110 clat percentiles (usec): 00:37:46.110 | 1.00th=[10028], 5.00th=[10028], 
10.00th=[10159], 20.00th=[10290], 00:37:46.110 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:37:46.110 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:37:46.110 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14353], 99.95th=[14353], 00:37:46.110 | 99.99th=[14353] 00:37:46.110 bw ( KiB/s): min=32256, max=36096, per=33.29%, avg=34648.16, stdev=1227.04, samples=19 00:37:46.110 iops : min= 252, max= 282, avg=270.63, stdev= 9.57, samples=19 00:37:46.110 lat (msec) : 10=0.48%, 20=99.52% 00:37:46.110 cpu : usr=94.48%, sys=5.08%, ctx=11, majf=0, minf=9 00:37:46.110 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.110 issued rwts: total=2712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.110 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:46.110 00:37:46.110 Run status group 0 (all jobs): 00:37:46.110 READ: bw=102MiB/s (107MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=1017MiB (1066MB), run=10005-10006msec 00:37:46.370 08:34:19 -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:46.370 08:34:19 -- target/dif.sh@43 -- # local sub 00:37:46.370 08:34:19 -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.370 08:34:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:46.370 08:34:19 -- target/dif.sh@36 -- # local sub_id=0 00:37:46.370 08:34:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:46.370 08:34:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:46.370 08:34:19 -- common/autotest_common.sh@10 -- # set +x 00:37:46.370 08:34:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:46.370 08:34:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:46.370 08:34:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:46.370 08:34:19 -- common/autotest_common.sh@10 -- # set +x 00:37:46.370 08:34:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:46.370 00:37:46.370 real 0m10.993s 00:37:46.370 user 0m29.034s 00:37:46.370 sys 0m1.807s 00:37:46.370 08:34:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:46.370 08:34:19 -- common/autotest_common.sh@10 -- # set +x 00:37:46.370 ************************************ 00:37:46.370 END TEST fio_dif_digest 00:37:46.370 ************************************ 00:37:46.370 08:34:19 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:46.370 08:34:19 -- target/dif.sh@147 -- # nvmftestfini 00:37:46.370 08:34:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:37:46.370 08:34:19 -- nvmf/common.sh@116 -- # sync 00:37:46.370 08:34:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:46.370 08:34:19 -- nvmf/common.sh@119 -- # set +e 00:37:46.370 08:34:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:46.370 08:34:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:46.370 rmmod nvme_tcp 00:37:46.370 rmmod nvme_fabrics 00:37:46.370 rmmod nvme_keyring 00:37:46.370 08:34:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:37:46.370 08:34:19 -- nvmf/common.sh@123 -- # set -e 00:37:46.370 08:34:19 -- nvmf/common.sh@124 -- # return 0 00:37:46.370 08:34:19 -- nvmf/common.sh@477 -- # '[' -n 74439 ']' 00:37:46.370 08:34:19 -- nvmf/common.sh@478 -- # killprocess 74439 00:37:46.370 08:34:19 -- common/autotest_common.sh@926 -- # '[' -z 74439 ']' 00:37:46.370 08:34:19 -- common/autotest_common.sh@930 
-- # kill -0 74439 00:37:46.370 08:34:19 -- common/autotest_common.sh@931 -- # uname 00:37:46.370 08:34:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:46.370 08:34:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74439 00:37:46.630 08:34:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:46.630 08:34:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:46.630 killing process with pid 74439 00:37:46.630 08:34:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74439' 00:37:46.630 08:34:19 -- common/autotest_common.sh@945 -- # kill 74439 00:37:46.630 08:34:19 -- common/autotest_common.sh@950 -- # wait 74439 00:37:46.630 08:34:19 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:37:46.630 08:34:19 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:47.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:47.199 Waiting for block devices as requested 00:37:47.199 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:37:47.458 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:37:47.458 08:34:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:47.458 08:34:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:47.458 08:34:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:47.458 08:34:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:47.458 08:34:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.458 08:34:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:47.458 08:34:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.459 08:34:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:37:47.459 00:37:47.459 real 0m59.705s 00:37:47.459 user 3m54.959s 00:37:47.459 sys 0m13.251s 00:37:47.459 08:34:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:47.459 08:34:20 -- common/autotest_common.sh@10 -- # set +x 00:37:47.459 ************************************ 00:37:47.459 END TEST nvmf_dif 00:37:47.459 ************************************ 00:37:47.459 08:34:20 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:47.459 08:34:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:47.459 08:34:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:47.459 08:34:20 -- common/autotest_common.sh@10 -- # set +x 00:37:47.459 ************************************ 00:37:47.459 START TEST nvmf_abort_qd_sizes 00:37:47.459 ************************************ 00:37:47.459 08:34:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:47.718 * Looking for test storage... 
00:37:47.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:47.718 08:34:20 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:47.718 08:34:20 -- nvmf/common.sh@7 -- # uname -s 00:37:47.718 08:34:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:47.718 08:34:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:47.718 08:34:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:47.718 08:34:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:47.718 08:34:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:47.718 08:34:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:47.718 08:34:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:47.718 08:34:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:47.718 08:34:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:47.718 08:34:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.718 08:34:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d 00:37:47.718 08:34:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ce38300-f67f-48af-81f9-d51a7c54746d 00:37:47.718 08:34:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.718 08:34:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.718 08:34:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:47.718 08:34:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:47.718 08:34:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.718 08:34:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.718 08:34:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.718 08:34:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.718 08:34:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.718 08:34:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.718 08:34:20 -- paths/export.sh@5 -- # export PATH 00:37:47.718 08:34:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.718 08:34:20 -- nvmf/common.sh@46 -- # : 0 00:37:47.718 08:34:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:47.718 08:34:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:47.719 08:34:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:47.719 08:34:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.719 08:34:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.719 08:34:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:47.719 08:34:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:47.719 08:34:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:47.719 08:34:20 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:37:47.719 08:34:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:47.719 08:34:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:47.719 08:34:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:47.719 08:34:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:47.719 08:34:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:47.719 08:34:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.719 08:34:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:47.719 08:34:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.719 08:34:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:37:47.719 08:34:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:37:47.719 08:34:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:37:47.719 08:34:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:37:47.719 08:34:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:37:47.719 08:34:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:37:47.719 08:34:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:47.719 08:34:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:47.719 08:34:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:47.719 08:34:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:37:47.719 08:34:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:47.719 08:34:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:47.719 08:34:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:47.719 08:34:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:47.719 08:34:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:47.719 08:34:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:47.719 08:34:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:47.719 08:34:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:47.719 08:34:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:37:47.719 08:34:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:37:47.719 Cannot find device "nvmf_tgt_br" 00:37:47.719 08:34:20 -- nvmf/common.sh@154 -- # true 00:37:47.719 08:34:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:37:47.719 Cannot find device "nvmf_tgt_br2" 00:37:47.719 08:34:20 -- nvmf/common.sh@155 -- # true 
00:37:47.719 08:34:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:37:47.719 08:34:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:37:47.719 Cannot find device "nvmf_tgt_br" 00:37:47.719 08:34:20 -- nvmf/common.sh@157 -- # true 00:37:47.719 08:34:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:37:47.719 Cannot find device "nvmf_tgt_br2" 00:37:47.719 08:34:21 -- nvmf/common.sh@158 -- # true 00:37:47.719 08:34:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:37:47.719 08:34:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:37:47.989 08:34:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:47.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:47.989 08:34:21 -- nvmf/common.sh@161 -- # true 00:37:47.989 08:34:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:47.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:47.990 08:34:21 -- nvmf/common.sh@162 -- # true 00:37:47.990 08:34:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:37:47.990 08:34:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:47.990 08:34:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:47.990 08:34:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:47.990 08:34:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:47.990 08:34:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:47.990 08:34:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:47.990 08:34:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:47.990 08:34:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:47.990 08:34:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:37:47.990 08:34:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:37:47.990 08:34:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:37:47.990 08:34:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:37:47.990 08:34:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:47.990 08:34:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:47.990 08:34:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:47.990 08:34:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:37:47.990 08:34:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:37:47.990 08:34:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:37:47.990 08:34:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:47.990 08:34:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:47.990 08:34:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:47.990 08:34:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:47.990 08:34:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:37:47.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:47.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:37:47.990 00:37:47.990 --- 10.0.0.2 ping statistics --- 00:37:47.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.990 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:37:47.990 08:34:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:37:47.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:47.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:37:47.990 00:37:47.990 --- 10.0.0.3 ping statistics --- 00:37:47.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.990 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:37:47.990 08:34:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:47.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:47.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:37:47.990 00:37:47.990 --- 10.0.0.1 ping statistics --- 00:37:47.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.990 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:37:47.990 08:34:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:47.990 08:34:21 -- nvmf/common.sh@421 -- # return 0 00:37:47.990 08:34:21 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:37:47.990 08:34:21 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:48.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:48.934 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:37:48.934 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:37:48.934 08:34:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:48.934 08:34:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:48.934 08:34:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:48.934 08:34:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:48.934 08:34:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:48.934 08:34:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:48.934 08:34:22 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:37:48.934 08:34:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:48.934 08:34:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:48.934 08:34:22 -- common/autotest_common.sh@10 -- # set +x 00:37:49.194 08:34:22 -- nvmf/common.sh@469 -- # nvmfpid=75811 00:37:49.194 08:34:22 -- nvmf/common.sh@470 -- # waitforlisten 75811 00:37:49.194 08:34:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:49.194 08:34:22 -- common/autotest_common.sh@819 -- # '[' -z 75811 ']' 00:37:49.194 08:34:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:49.194 08:34:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:49.194 08:34:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:49.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:49.194 08:34:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:49.194 08:34:22 -- common/autotest_common.sh@10 -- # set +x 00:37:49.194 [2024-04-17 08:34:22.321428] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:37:49.194 [2024-04-17 08:34:22.321495] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:49.194 [2024-04-17 08:34:22.446601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:49.453 [2024-04-17 08:34:22.546592] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:49.453 [2024-04-17 08:34:22.546757] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:49.453 [2024-04-17 08:34:22.546766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:49.453 [2024-04-17 08:34:22.546772] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:49.453 [2024-04-17 08:34:22.547039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.453 [2024-04-17 08:34:22.546910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:49.453 [2024-04-17 08:34:22.547041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:49.453 [2024-04-17 08:34:22.546966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:50.022 08:34:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:50.022 08:34:23 -- common/autotest_common.sh@852 -- # return 0 00:37:50.022 08:34:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:50.022 08:34:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:50.022 08:34:23 -- common/autotest_common.sh@10 -- # set +x 00:37:50.022 08:34:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:50.022 08:34:23 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:50.022 08:34:23 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:37:50.022 08:34:23 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:37:50.022 08:34:23 -- scripts/common.sh@311 -- # local bdf bdfs 00:37:50.022 08:34:23 -- scripts/common.sh@312 -- # local nvmes 00:37:50.022 08:34:23 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:37:50.022 08:34:23 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:37:50.022 08:34:23 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:37:50.022 08:34:23 -- scripts/common.sh@297 -- # local bdf= 00:37:50.022 08:34:23 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:37:50.022 08:34:23 -- scripts/common.sh@232 -- # local class 00:37:50.022 08:34:23 -- scripts/common.sh@233 -- # local subclass 00:37:50.022 08:34:23 -- scripts/common.sh@234 -- # local progif 00:37:50.022 08:34:23 -- scripts/common.sh@235 -- # printf %02x 1 00:37:50.022 08:34:23 -- scripts/common.sh@235 -- # class=01 00:37:50.022 08:34:23 -- scripts/common.sh@236 -- # printf %02x 8 00:37:50.022 08:34:23 -- scripts/common.sh@236 -- # subclass=08 00:37:50.022 08:34:23 -- scripts/common.sh@237 -- # printf %02x 2 00:37:50.022 08:34:23 -- scripts/common.sh@237 -- # progif=02 00:37:50.022 08:34:23 -- scripts/common.sh@239 -- # hash lspci 00:37:50.022 08:34:23 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:37:50.022 08:34:23 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:37:50.022 08:34:23 -- scripts/common.sh@242 -- # grep -i -- -p02 00:37:50.022 08:34:23 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:37:50.022 08:34:23 -- scripts/common.sh@244 -- # tr -d '"' 00:37:50.022 08:34:23 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:37:50.022 08:34:23 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:37:50.022 08:34:23 -- scripts/common.sh@15 -- # local i 00:37:50.022 08:34:23 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:37:50.022 08:34:23 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:37:50.022 08:34:23 -- scripts/common.sh@24 -- # return 0 00:37:50.022 08:34:23 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:37:50.022 08:34:23 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:37:50.022 08:34:23 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:37:50.022 08:34:23 -- scripts/common.sh@15 -- # local i 00:37:50.022 08:34:23 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:37:50.022 08:34:23 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:37:50.022 08:34:23 -- scripts/common.sh@24 -- # return 0 00:37:50.022 08:34:23 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:37:50.022 08:34:23 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:37:50.022 08:34:23 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:37:50.022 08:34:23 -- scripts/common.sh@322 -- # uname -s 00:37:50.022 08:34:23 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:37:50.022 08:34:23 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:37:50.022 08:34:23 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:37:50.022 08:34:23 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:37:50.022 08:34:23 -- scripts/common.sh@322 -- # uname -s 00:37:50.022 08:34:23 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:37:50.022 08:34:23 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:37:50.022 08:34:23 -- scripts/common.sh@327 -- # (( 2 )) 00:37:50.022 08:34:23 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:37:50.022 08:34:23 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:37:50.022 08:34:23 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:37:50.022 08:34:23 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:37:50.022 08:34:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:50.022 08:34:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:50.022 08:34:23 -- common/autotest_common.sh@10 -- # set +x 00:37:50.022 ************************************ 00:37:50.022 START TEST spdk_target_abort 00:37:50.022 ************************************ 00:37:50.022 08:34:23 -- common/autotest_common.sh@1104 -- # spdk_target 00:37:50.022 08:34:23 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:50.022 08:34:23 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:37:50.022 08:34:23 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:37:50.022 08:34:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.022 08:34:23 -- common/autotest_common.sh@10 -- # set +x 00:37:50.281 spdk_targetn1 00:37:50.281 08:34:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:50.281 08:34:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.281 08:34:23 -- common/autotest_common.sh@10 -- # set +x 00:37:50.281 [2024-04-17 
08:34:23.426201] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.281 08:34:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:37:50.281 08:34:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.281 08:34:23 -- common/autotest_common.sh@10 -- # set +x 00:37:50.281 08:34:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:37:50.281 08:34:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.281 08:34:23 -- common/autotest_common.sh@10 -- # set +x 00:37:50.281 08:34:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:37:50.281 08:34:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:50.281 08:34:23 -- common/autotest_common.sh@10 -- # set +x 00:37:50.281 [2024-04-17 08:34:23.458287] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.281 08:34:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:50.281 08:34:23 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:53.572 Initializing NVMe Controllers 00:37:53.572 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:37:53.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:37:53.572 Initialization complete. Launching workers. 00:37:53.572 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 12804, failed: 0 00:37:53.572 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1105, failed to submit 11699 00:37:53.572 success 699, unsuccess 406, failed 0 00:37:53.572 08:34:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:53.572 08:34:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:56.865 Initializing NVMe Controllers 00:37:56.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:37:56.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:37:56.865 Initialization complete. Launching workers. 00:37:56.865 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9000, failed: 0 00:37:56.865 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1166, failed to submit 7834 00:37:56.865 success 371, unsuccess 795, failed 0 00:37:56.865 08:34:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:56.866 08:34:30 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:38:00.179 Initializing NVMe Controllers 00:38:00.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:38:00.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:38:00.179 Initialization complete. Launching workers. 
00:38:00.179 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32947, failed: 0 00:38:00.179 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2385, failed to submit 30562 00:38:00.179 success 507, unsuccess 1878, failed 0 00:38:00.179 08:34:33 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:38:00.179 08:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:00.179 08:34:33 -- common/autotest_common.sh@10 -- # set +x 00:38:00.179 08:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:00.179 08:34:33 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:00.179 08:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:38:00.179 08:34:33 -- common/autotest_common.sh@10 -- # set +x 00:38:00.744 08:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:38:00.744 08:34:34 -- target/abort_qd_sizes.sh@62 -- # killprocess 75811 00:38:00.744 08:34:34 -- common/autotest_common.sh@926 -- # '[' -z 75811 ']' 00:38:00.744 08:34:34 -- common/autotest_common.sh@930 -- # kill -0 75811 00:38:00.744 08:34:34 -- common/autotest_common.sh@931 -- # uname 00:38:00.744 08:34:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:00.744 08:34:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75811 00:38:00.744 08:34:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:38:00.744 08:34:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:38:00.744 killing process with pid 75811 00:38:00.744 08:34:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75811' 00:38:00.744 08:34:34 -- common/autotest_common.sh@945 -- # kill 75811 00:38:00.744 08:34:34 -- common/autotest_common.sh@950 -- # wait 75811 00:38:01.001 00:38:01.002 real 0m10.926s 00:38:01.002 user 0m44.613s 00:38:01.002 sys 0m1.750s 00:38:01.002 08:34:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:01.002 08:34:34 -- common/autotest_common.sh@10 -- # set +x 00:38:01.002 ************************************ 00:38:01.002 END TEST spdk_target_abort 00:38:01.002 ************************************ 00:38:01.002 08:34:34 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:38:01.002 08:34:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:38:01.002 08:34:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:01.002 08:34:34 -- common/autotest_common.sh@10 -- # set +x 00:38:01.258 ************************************ 00:38:01.258 START TEST kernel_target_abort 00:38:01.259 ************************************ 00:38:01.259 08:34:34 -- common/autotest_common.sh@1104 -- # kernel_target 00:38:01.259 08:34:34 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:38:01.259 08:34:34 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:38:01.259 08:34:34 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:38:01.259 08:34:34 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:38:01.259 08:34:34 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:38:01.259 08:34:34 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:38:01.259 08:34:34 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:01.259 08:34:34 -- nvmf/common.sh@627 -- # local block nvme 00:38:01.259 08:34:34 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:38:01.259 08:34:34 -- nvmf/common.sh@630 -- # modprobe nvmet 00:38:01.259 08:34:34 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:01.259 08:34:34 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:01.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:01.516 Waiting for block devices as requested 00:38:01.516 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:38:01.774 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:38:01.774 08:34:34 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:38:01.774 08:34:34 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:01.774 08:34:34 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:38:01.774 08:34:34 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:38:01.774 08:34:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:38:01.774 No valid GPT data, bailing 00:38:01.774 08:34:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:01.774 08:34:35 -- scripts/common.sh@393 -- # pt= 00:38:01.774 08:34:35 -- scripts/common.sh@394 -- # return 1 00:38:01.774 08:34:35 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:38:01.774 08:34:35 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:38:01.774 08:34:35 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:38:01.774 08:34:35 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:38:01.774 08:34:35 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:38:01.774 08:34:35 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:38:01.774 No valid GPT data, bailing 00:38:01.774 08:34:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:38:02.032 08:34:35 -- scripts/common.sh@393 -- # pt= 00:38:02.032 08:34:35 -- scripts/common.sh@394 -- # return 1 00:38:02.032 08:34:35 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:38:02.032 08:34:35 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:38:02.032 08:34:35 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:38:02.032 08:34:35 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:38:02.032 08:34:35 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:38:02.032 08:34:35 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:38:02.032 No valid GPT data, bailing 00:38:02.032 08:34:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:38:02.032 08:34:35 -- scripts/common.sh@393 -- # pt= 00:38:02.032 08:34:35 -- scripts/common.sh@394 -- # return 1 00:38:02.032 08:34:35 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:38:02.032 08:34:35 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:38:02.032 08:34:35 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:38:02.032 08:34:35 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:38:02.032 08:34:35 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:38:02.032 08:34:35 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:38:02.032 No valid GPT data, bailing 00:38:02.032 08:34:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:38:02.032 08:34:35 -- scripts/common.sh@393 -- # pt= 00:38:02.032 08:34:35 -- scripts/common.sh@394 -- # return 1 00:38:02.032 08:34:35 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:38:02.032 08:34:35 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:38:02.032 08:34:35 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:38:02.032 08:34:35 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:38:02.032 08:34:35 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:02.032 08:34:35 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:38:02.032 08:34:35 -- nvmf/common.sh@654 -- # echo 1 00:38:02.032 08:34:35 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:38:02.032 08:34:35 -- nvmf/common.sh@656 -- # echo 1 00:38:02.032 08:34:35 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:38:02.032 08:34:35 -- nvmf/common.sh@663 -- # echo tcp 00:38:02.032 08:34:35 -- nvmf/common.sh@664 -- # echo 4420 00:38:02.032 08:34:35 -- nvmf/common.sh@665 -- # echo ipv4 00:38:02.032 08:34:35 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:02.032 08:34:35 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ce38300-f67f-48af-81f9-d51a7c54746d --hostid=2ce38300-f67f-48af-81f9-d51a7c54746d -a 10.0.0.1 -t tcp -s 4420 00:38:02.032 00:38:02.032 Discovery Log Number of Records 2, Generation counter 2 00:38:02.032 =====Discovery Log Entry 0====== 00:38:02.032 trtype: tcp 00:38:02.032 adrfam: ipv4 00:38:02.032 subtype: current discovery subsystem 00:38:02.032 treq: not specified, sq flow control disable supported 00:38:02.032 portid: 1 00:38:02.032 trsvcid: 4420 00:38:02.032 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:02.032 traddr: 10.0.0.1 00:38:02.032 eflags: none 00:38:02.032 sectype: none 00:38:02.032 =====Discovery Log Entry 1====== 00:38:02.032 trtype: tcp 00:38:02.032 adrfam: ipv4 00:38:02.032 subtype: nvme subsystem 00:38:02.032 treq: not specified, sq flow control disable supported 00:38:02.032 portid: 1 00:38:02.032 trsvcid: 4420 00:38:02.032 subnqn: kernel_target 00:38:02.032 traddr: 10.0.0.1 00:38:02.032 eflags: none 00:38:02.032 sectype: none 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:02.032 08:34:35 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:38:05.313 Initializing NVMe Controllers 00:38:05.313 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:38:05.313 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:38:05.313 Initialization complete. Launching workers. 00:38:05.313 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 41154, failed: 0 00:38:05.313 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 41154, failed to submit 0 00:38:05.313 success 0, unsuccess 41154, failed 0 00:38:05.313 08:34:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:05.313 08:34:38 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:38:08.605 Initializing NVMe Controllers 00:38:08.605 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:38:08.605 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:38:08.605 Initialization complete. Launching workers. 00:38:08.605 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 80572, failed: 0 00:38:08.605 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 37591, failed to submit 42981 00:38:08.605 success 0, unsuccess 37591, failed 0 00:38:08.605 08:34:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:08.605 08:34:41 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:38:11.956 Initializing NVMe Controllers 00:38:11.956 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:38:11.956 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:38:11.956 Initialization complete. Launching workers. 
00:38:11.956 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 99509, failed: 0 00:38:11.956 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 24866, failed to submit 74643 00:38:11.956 success 0, unsuccess 24866, failed 0 00:38:11.956 08:34:44 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:38:11.956 08:34:44 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:38:11.956 08:34:44 -- nvmf/common.sh@677 -- # echo 0 00:38:11.956 08:34:44 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:38:11.956 08:34:44 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:38:11.956 08:34:44 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:11.956 08:34:44 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:38:11.956 08:34:44 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:38:11.956 08:34:44 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:38:11.956 00:38:11.956 real 0m10.539s 00:38:11.956 user 0m6.360s 00:38:11.956 sys 0m1.887s 00:38:11.956 08:34:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:11.956 08:34:44 -- common/autotest_common.sh@10 -- # set +x 00:38:11.956 ************************************ 00:38:11.956 END TEST kernel_target_abort 00:38:11.956 ************************************ 00:38:11.956 08:34:44 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:38:11.956 08:34:44 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:38:11.956 08:34:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:38:11.956 08:34:44 -- nvmf/common.sh@116 -- # sync 00:38:11.956 08:34:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:38:11.956 08:34:44 -- nvmf/common.sh@119 -- # set +e 00:38:11.956 08:34:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:38:11.956 08:34:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:38:11.956 rmmod nvme_tcp 00:38:11.956 rmmod nvme_fabrics 00:38:11.956 rmmod nvme_keyring 00:38:11.956 08:34:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:38:11.956 08:34:45 -- nvmf/common.sh@123 -- # set -e 00:38:11.956 08:34:45 -- nvmf/common.sh@124 -- # return 0 00:38:11.956 08:34:45 -- nvmf/common.sh@477 -- # '[' -n 75811 ']' 00:38:11.956 08:34:45 -- nvmf/common.sh@478 -- # killprocess 75811 00:38:11.956 08:34:45 -- common/autotest_common.sh@926 -- # '[' -z 75811 ']' 00:38:11.956 08:34:45 -- common/autotest_common.sh@930 -- # kill -0 75811 00:38:11.956 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (75811) - No such process 00:38:11.956 Process with pid 75811 is not found 00:38:11.956 08:34:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 75811 is not found' 00:38:11.956 08:34:45 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:38:11.956 08:34:45 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:12.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:12.785 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:38:12.785 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:38:12.785 08:34:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:38:12.785 08:34:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:38:12.785 08:34:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:12.785 08:34:46 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:38:12.785 08:34:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.785 08:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:12.785 08:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.785 08:34:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:38:12.785 00:38:12.785 real 0m25.328s 00:38:12.785 user 0m52.393s 00:38:12.785 sys 0m5.342s 00:38:12.785 08:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:12.785 08:34:46 -- common/autotest_common.sh@10 -- # set +x 00:38:12.785 ************************************ 00:38:12.785 END TEST nvmf_abort_qd_sizes 00:38:12.785 ************************************ 00:38:13.045 08:34:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:13.045 08:34:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:13.045 08:34:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:13.045 08:34:46 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:13.045 08:34:46 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:38:13.045 08:34:46 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:38:13.045 08:34:46 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:38:13.045 08:34:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:38:13.045 08:34:46 -- common/autotest_common.sh@10 -- # set +x 00:38:13.045 08:34:46 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:38:13.045 08:34:46 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:38:13.045 08:34:46 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:38:13.045 08:34:46 -- common/autotest_common.sh@10 -- # set +x 00:38:15.004 INFO: APP EXITING 00:38:15.004 INFO: killing all VMs 00:38:15.004 INFO: killing vhost app 00:38:15.004 INFO: EXIT DONE 00:38:15.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:15.945 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:38:15.945 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:38:16.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:16.772 Cleaning 00:38:16.772 Removing: /var/run/dpdk/spdk0/config 00:38:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:16.772 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:16.772 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:16.772 Removing: /var/run/dpdk/spdk1/config 00:38:16.772 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:16.772 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:16.772 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:38:16.772 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:16.772 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:16.772 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:16.772 Removing: /var/run/dpdk/spdk2/config 00:38:16.772 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:16.772 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:16.772 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:16.772 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:16.772 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:16.772 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:16.772 Removing: /var/run/dpdk/spdk3/config 00:38:16.772 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:16.772 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:16.772 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:16.772 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:16.772 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:16.772 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:16.772 Removing: /var/run/dpdk/spdk4/config 00:38:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:16.772 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:16.772 Removing: /dev/shm/nvmf_trace.0 00:38:16.772 Removing: /dev/shm/spdk_tgt_trace.pid54080 00:38:16.772 Removing: /var/run/dpdk/spdk0 00:38:16.772 Removing: /var/run/dpdk/spdk1 00:38:16.772 Removing: /var/run/dpdk/spdk2 00:38:16.772 Removing: /var/run/dpdk/spdk3 00:38:16.772 Removing: /var/run/dpdk/spdk4 00:38:16.772 Removing: /var/run/dpdk/spdk_pid53942 00:38:16.772 Removing: /var/run/dpdk/spdk_pid54080 00:38:16.772 Removing: /var/run/dpdk/spdk_pid54317 00:38:16.772 Removing: /var/run/dpdk/spdk_pid54502 00:38:16.772 Removing: /var/run/dpdk/spdk_pid54647 00:38:16.772 Removing: /var/run/dpdk/spdk_pid54711 00:38:16.772 Removing: /var/run/dpdk/spdk_pid54780 00:38:16.772 Removing: /var/run/dpdk/spdk_pid54870 00:38:16.772 Removing: /var/run/dpdk/spdk_pid54941 00:38:16.772 Removing: /var/run/dpdk/spdk_pid54979 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55015 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55074 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55186 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55612 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55664 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55710 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55726 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55795 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55811 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55872 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55888 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55934 00:38:17.037 Removing: /var/run/dpdk/spdk_pid55952 00:38:17.038 Removing: /var/run/dpdk/spdk_pid55992 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56010 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56126 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56167 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56235 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56292 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56311 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56375 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56389 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56429 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56443 
00:38:17.038 Removing: /var/run/dpdk/spdk_pid56478 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56497 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56532 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56551 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56583 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56605 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56634 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56654 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56688 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56708 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56742 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56762 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56796 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56816 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56845 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56870 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56899 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56924 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56953 00:38:17.038 Removing: /var/run/dpdk/spdk_pid56977 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57007 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57021 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57061 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57075 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57117 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57131 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57171 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57185 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57220 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57242 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57280 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57302 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57340 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57359 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57394 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57408 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57449 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57518 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57600 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57905 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57917 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57954 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57966 00:38:17.038 Removing: /var/run/dpdk/spdk_pid57985 00:38:17.038 Removing: /var/run/dpdk/spdk_pid58003 00:38:17.038 Removing: /var/run/dpdk/spdk_pid58010 00:38:17.038 Removing: /var/run/dpdk/spdk_pid58029 00:38:17.038 Removing: /var/run/dpdk/spdk_pid58047 00:38:17.038 Removing: /var/run/dpdk/spdk_pid58065 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58079 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58097 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58109 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58123 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58141 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58158 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58172 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58190 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58203 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58216 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58250 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58264 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58297 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58353 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58380 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58395 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58418 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58433 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58439 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58484 00:38:17.312 Removing: 
/var/run/dpdk/spdk_pid58501 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58522 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58535 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58537 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58550 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58560 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58567 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58579 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58582 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58614 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58641 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58650 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58679 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58688 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58700 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58742 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58753 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58780 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58787 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58795 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58803 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58814 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58823 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58830 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58838 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58911 00:38:17.312 Removing: /var/run/dpdk/spdk_pid58953 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59060 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59086 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59132 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59153 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59168 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59182 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59217 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59237 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59304 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59319 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59357 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59442 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59487 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59517 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59609 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59651 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59688 00:38:17.312 Removing: /var/run/dpdk/spdk_pid59898 00:38:17.312 Removing: /var/run/dpdk/spdk_pid60000 00:38:17.312 Removing: /var/run/dpdk/spdk_pid60023 00:38:17.312 Removing: /var/run/dpdk/spdk_pid60339 00:38:17.312 Removing: /var/run/dpdk/spdk_pid60377 00:38:17.312 Removing: /var/run/dpdk/spdk_pid60685 00:38:17.312 Removing: /var/run/dpdk/spdk_pid61079 00:38:17.312 Removing: /var/run/dpdk/spdk_pid61325 00:38:17.312 Removing: /var/run/dpdk/spdk_pid62092 00:38:17.312 Removing: /var/run/dpdk/spdk_pid62910 00:38:17.312 Removing: /var/run/dpdk/spdk_pid63021 00:38:17.571 Removing: /var/run/dpdk/spdk_pid63094 00:38:17.571 Removing: /var/run/dpdk/spdk_pid64333 00:38:17.571 Removing: /var/run/dpdk/spdk_pid64542 00:38:17.571 Removing: /var/run/dpdk/spdk_pid64842 00:38:17.571 Removing: /var/run/dpdk/spdk_pid64956 00:38:17.571 Removing: /var/run/dpdk/spdk_pid65084 00:38:17.571 Removing: /var/run/dpdk/spdk_pid65112 00:38:17.571 Removing: /var/run/dpdk/spdk_pid65139 00:38:17.571 Removing: /var/run/dpdk/spdk_pid65167 00:38:17.571 Removing: /var/run/dpdk/spdk_pid65263 00:38:17.571 Removing: /var/run/dpdk/spdk_pid65393 00:38:17.571 Removing: /var/run/dpdk/spdk_pid65539 00:38:17.571 Removing: /var/run/dpdk/spdk_pid65620 00:38:17.571 Removing: /var/run/dpdk/spdk_pid66005 00:38:17.571 Removing: /var/run/dpdk/spdk_pid66344 
00:38:17.571 Removing: /var/run/dpdk/spdk_pid66351 00:38:17.571 Removing: /var/run/dpdk/spdk_pid68545 00:38:17.571 Removing: /var/run/dpdk/spdk_pid68552 00:38:17.571 Removing: /var/run/dpdk/spdk_pid68827 00:38:17.571 Removing: /var/run/dpdk/spdk_pid68847 00:38:17.571 Removing: /var/run/dpdk/spdk_pid68861 00:38:17.571 Removing: /var/run/dpdk/spdk_pid68892 00:38:17.571 Removing: /var/run/dpdk/spdk_pid68897 00:38:17.571 Removing: /var/run/dpdk/spdk_pid68981 00:38:17.571 Removing: /var/run/dpdk/spdk_pid68983 00:38:17.571 Removing: /var/run/dpdk/spdk_pid69091 00:38:17.571 Removing: /var/run/dpdk/spdk_pid69093 00:38:17.571 Removing: /var/run/dpdk/spdk_pid69207 00:38:17.571 Removing: /var/run/dpdk/spdk_pid69214 00:38:17.571 Removing: /var/run/dpdk/spdk_pid69595 00:38:17.571 Removing: /var/run/dpdk/spdk_pid69638 00:38:17.571 Removing: /var/run/dpdk/spdk_pid69722 00:38:17.571 Removing: /var/run/dpdk/spdk_pid69777 00:38:17.571 Removing: /var/run/dpdk/spdk_pid70069 00:38:17.571 Removing: /var/run/dpdk/spdk_pid70270 00:38:17.571 Removing: /var/run/dpdk/spdk_pid70647 00:38:17.571 Removing: /var/run/dpdk/spdk_pid71168 00:38:17.571 Removing: /var/run/dpdk/spdk_pid71615 00:38:17.571 Removing: /var/run/dpdk/spdk_pid71670 00:38:17.571 Removing: /var/run/dpdk/spdk_pid71730 00:38:17.571 Removing: /var/run/dpdk/spdk_pid71790 00:38:17.571 Removing: /var/run/dpdk/spdk_pid71906 00:38:17.571 Removing: /var/run/dpdk/spdk_pid71967 00:38:17.571 Removing: /var/run/dpdk/spdk_pid72027 00:38:17.571 Removing: /var/run/dpdk/spdk_pid72083 00:38:17.571 Removing: /var/run/dpdk/spdk_pid72397 00:38:17.571 Removing: /var/run/dpdk/spdk_pid73554 00:38:17.571 Removing: /var/run/dpdk/spdk_pid73699 00:38:17.571 Removing: /var/run/dpdk/spdk_pid73942 00:38:17.571 Removing: /var/run/dpdk/spdk_pid74496 00:38:17.571 Removing: /var/run/dpdk/spdk_pid74659 00:38:17.571 Removing: /var/run/dpdk/spdk_pid74817 00:38:17.571 Removing: /var/run/dpdk/spdk_pid74914 00:38:17.571 Removing: /var/run/dpdk/spdk_pid75093 00:38:17.571 Removing: /var/run/dpdk/spdk_pid75203 00:38:17.571 Removing: /var/run/dpdk/spdk_pid75866 00:38:17.571 Removing: /var/run/dpdk/spdk_pid75903 00:38:17.571 Removing: /var/run/dpdk/spdk_pid75938 00:38:17.571 Removing: /var/run/dpdk/spdk_pid76188 00:38:17.571 Removing: /var/run/dpdk/spdk_pid76219 00:38:17.571 Removing: /var/run/dpdk/spdk_pid76254 00:38:17.572 Clean 00:38:17.831 killing process with pid 48089 00:38:17.831 killing process with pid 48093 00:38:17.831 08:34:50 -- common/autotest_common.sh@1436 -- # return 0 00:38:17.831 08:34:50 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:38:17.831 08:34:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:17.831 08:34:50 -- common/autotest_common.sh@10 -- # set +x 00:38:17.831 08:34:51 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:38:17.831 08:34:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:17.831 08:34:51 -- common/autotest_common.sh@10 -- # set +x 00:38:17.831 08:34:51 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:17.831 08:34:51 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:38:17.831 08:34:51 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:38:17.831 08:34:51 -- spdk/autotest.sh@394 -- # hash lcov 00:38:17.831 08:34:51 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:38:17.832 08:34:51 -- spdk/autotest.sh@396 -- # hostname 00:38:17.832 08:34:51 -- spdk/autotest.sh@396 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:38:18.091 geninfo: WARNING: invalid characters removed from testname! 00:38:44.692 08:35:15 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:45.296 08:35:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:47.831 08:35:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:50.384 08:35:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:52.918 08:35:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:55.456 08:35:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:57.361 08:35:30 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:57.361 08:35:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:57.361 08:35:30 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:57.361 08:35:30 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:57.361 08:35:30 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:57.361 08:35:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
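The autotest.sh coverage steps above (capture, merge with the baseline, repeated filtering) reduce to a three-stage lcov workflow. A condensed sketch under the repository and output paths shown in the log; LCOV_OPTS is introduced here purely as shorthand for the long --rc option string and is not a variable the script itself defines, and only a subset of the logged switches is repeated below.

# Post-process code coverage after the test run (sketch of the logged lcov calls).
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
SPDK=/home/vagrant/spdk_repo/spdk
OUT=$SPDK/../output

# 1. Capture the counters produced during the run into a test tracefile,
#    tagged with the hostname as seen in the log.
lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

# 2. Merge the pre-test baseline with the test capture.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Strip paths that should not count toward SPDK coverage (DPDK, system
#    headers, sample apps), rewriting the combined tracefile each time.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# 4. Drop the intermediate tracefiles once cov_total.info has been produced.
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"

Filtering with -r rewrites cov_total.info in place for each pattern, which is why the same file appears as both input and output in the logged commands.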
00:38:57.361 08:35:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.361 08:35:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.361 08:35:30 -- paths/export.sh@5 -- $ export PATH 00:38:57.361 08:35:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.361 08:35:30 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:38:57.361 08:35:30 -- common/autobuild_common.sh@435 -- $ date +%s 00:38:57.361 08:35:30 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713342930.XXXXXX 00:38:57.361 08:35:30 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713342930.L3eIPh 00:38:57.361 08:35:30 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:38:57.361 08:35:30 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:38:57.361 08:35:30 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:38:57.361 08:35:30 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:38:57.361 08:35:30 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:38:57.361 08:35:30 -- common/autobuild_common.sh@451 -- $ get_config_params 00:38:57.361 08:35:30 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:38:57.361 08:35:30 -- common/autotest_common.sh@10 -- $ set +x 00:38:57.361 08:35:30 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:38:57.361 08:35:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:38:57.361 08:35:30 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:38:57.361 08:35:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:57.361 08:35:30 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:57.361 08:35:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:57.361 08:35:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:57.361 08:35:30 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:57.361 08:35:30 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:57.361 
08:35:30 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:57.361 08:35:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:57.361 + [[ -n 5297 ]] 00:38:57.361 + sudo kill 5297 00:38:57.629 [Pipeline] } 00:38:57.648 [Pipeline] // timeout 00:38:57.654 [Pipeline] } 00:38:57.673 [Pipeline] // stage 00:38:57.679 [Pipeline] } 00:38:57.696 [Pipeline] // catchError 00:38:57.705 [Pipeline] stage 00:38:57.707 [Pipeline] { (Stop VM) 00:38:57.719 [Pipeline] sh 00:38:57.994 + vagrant halt 00:39:01.383 ==> default: Halting domain... 00:39:07.967 [Pipeline] sh 00:39:08.260 + vagrant destroy -f 00:39:11.550 ==> default: Removing domain... 00:39:11.562 [Pipeline] sh 00:39:11.845 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:39:11.854 [Pipeline] } 00:39:11.871 [Pipeline] // stage 00:39:11.876 [Pipeline] } 00:39:11.894 [Pipeline] // dir 00:39:11.899 [Pipeline] } 00:39:11.916 [Pipeline] // wrap 00:39:11.922 [Pipeline] } 00:39:11.937 [Pipeline] // catchError 00:39:11.945 [Pipeline] stage 00:39:11.947 [Pipeline] { (Epilogue) 00:39:11.960 [Pipeline] sh 00:39:12.240 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:18.816 [Pipeline] catchError 00:39:18.818 [Pipeline] { 00:39:18.832 [Pipeline] sh 00:39:19.115 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:19.115 Artifacts sizes are good 00:39:19.124 [Pipeline] } 00:39:19.141 [Pipeline] // catchError 00:39:19.154 [Pipeline] archiveArtifacts 00:39:19.161 Archiving artifacts 00:39:19.327 [Pipeline] cleanWs 00:39:19.339 [WS-CLEANUP] Deleting project workspace... 00:39:19.339 [WS-CLEANUP] Deferred wipeout is used... 00:39:19.345 [WS-CLEANUP] done 00:39:19.347 [Pipeline] } 00:39:19.367 [Pipeline] // stage 00:39:19.373 [Pipeline] } 00:39:19.390 [Pipeline] // node 00:39:19.397 [Pipeline] End of Pipeline 00:39:19.433 Finished: SUCCESS
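The closing pipeline stages follow the usual teardown pattern for these vagrant-based jobs: halt and destroy the guest, move the output directory back into the Jenkins workspace, then compress and size-check the artifacts before they are archived and the workspace is wiped. Roughly the following commands, run from wherever the job's Vagrantfile lives; WORKSPACE is only a stand-in for the /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 path used above.

# Epilogue sketch: stop and remove the test VM, then package the artifacts.
# WORKSPACE is a placeholder for the job workspace path seen in the log.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2

vagrant halt                 # "Halting domain..." in the log
vagrant destroy -f           # "Removing domain..."

# Pull the results produced during the run back into the workspace.
mv output "$WORKSPACE/output"

# Compress artifacts and verify their size before archiveArtifacts runs.
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh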